Anchore Enterprise Documentation

Welcome to Anchore Enterprise

Anchore Enterprise is a software bill of materials (SBOM)-powered software supply chain management solution designed for a cloud-native world. It provides continuous visibility into supply chain security risks. Anchore Enterprise takes a developer-friendly approach that minimizes friction by embedding automation into development toolchains to generate SBOMs and accurately identify vulnerabilities, malware, misconfigurations, and secrets for faster remediation.

Start by going to the Overview of Anchore Enterprise to learn more about the basic concepts and functions.

For information about deploying and operating an Anchore Enterprise instance:

For information on using Anchore Enterprise for security and compliance workflows:

Reference information:

Other:

Note: Many topics have nested sub-topics in the navigation pane to the left that become visible when you click a topic.

1 - Overview of Anchore Enterprise

What is Anchore Enterprise?

Anchore Enterprise is a software bill of materials (SBOM)-powered software supply chain management solution designed for a cloud-native world. It provides continuous visibility into supply chain security risks. Anchore Enterprise takes a developer-friendly approach that minimizes friction by embedding automation into development toolchains to generate SBOMs and accurately identify vulnerabilities, malware, misconfigurations, and secrets for faster remediation.

Gaining Visibility with SBOMs

Anchore Enterprise generates detailed SBOMs at each step in the development process, providing a complete inventory of the software components including the direct and transitive dependencies you use. Anchore Enterprise stores all SBOMs in an SBOM repository to enable ongoing monitoring of your software for new or zero-day vulnerabilities that can arise even post-deployment.

Anchore Enterprise also detects SBOM drift in the build process, issuing an alert for changes in SBOMs so they can be assessed for risk, malware, compromised software, and malicious activity.

Identifying Vulnerability and Security Issues

Starting with the SBOM, Anchore Enterprise uses multiple vulnerability feeds along with a precision vulnerability matching algorithm to pinpoint relevant vulnerabilities and minimize false positives. Anchore Enterprise also identifies malware, cryptominers, secrets, misconfigurations, and other security issues.

Automating through Policies

Anchore Enterprise includes a powerful policy engine that enables you to define guardrails and automate compliance with industry standards or internal rules. Using Anchore’s customizable policies, you can automatically identify the security issues that you care about and alert developers or create policy gates for critical issues.

1.1 - Anchore Enterprise Capabilities

SBOM Generation and Management

A software bill of materials (SBOM) is the foundational element that powers Anchore Enterprise's secure management of the software supply chain. Anchore Enterprise automatically generates and analyzes comprehensive SBOMs at each step of the development lifecycle. SBOMs are stored in a repository to provide visibility into software components and dependencies, as well as continuous monitoring for new vulnerabilities and risks throughout the development process and post-deployment. See SBOM Generation and Management for more information.

About Anchore Enterprise SBOMs

An SBOM is a list of software components and relevant metadata that includes packages, code snippets, licenses, configurations, and other elements of an application.

Anchore Enterprise generates high-fidelity SBOMs by scanning container images and source code repositories. Anchore’s native SBOM format includes a rich set of metadata that is a superset of data included in SBOM standards such as SPDX and CycloneDX. Using this additional level of metadata, Anchore can identify secrets, file permissions, misconfiguration, malware, insecure practices, and more.

Anchore Enterprise SBOMs identify:

  • Open source dependencies including ecosystem type (OS, language, and other metadata)
  • Nested dependencies in archive files (WAR files, JAR files and more)
  • Package details such as name, version, creator, and license information
  • Filesystem metadata such as the file name, size, permissions, creation time, modification time, and hashes
  • Malware
  • Secrets, keys, and credentials

Anchore Enterprise supported ecosystems

Anchore Enterprise supports the following packaging ecosystems when identifying SBOM content. The Operating System category captures Linux packaging ecosystems. The Binary detector will inspect content to identify binaries that were installed outside of packaging ecosystems.

  • Operating System
    • RPM
    • DEB
    • APK
  • Python
  • Java
  • NuGet
  • Golang
  • Binaries
    • Apache httpd
    • BusyBox
    • Consul
    • Golang
    • HAProxy
    • Helm
    • Java
    • Memcached
    • Nodejs
    • PHP
    • Perl
    • PostgreSQL
    • Python
    • Redis
    • Rust
    • Traefik

How Anchore Enterprise Uses SBOMs

Identify Vulnerabilities and Risk for Remediation

Anchore Enterprise generates detailed SBOMs at each stage of the software development lifecycle and stores them in a centralized repository to provide visibility into components and open source dependencies. These SBOMs are analyzed for vulnerabilities, malware, secrets (embedded passwords and credentials), misconfigurations, and other risks. Because SBOMs are stored in a repository, users can then continually monitor SBOMs for new vulnerabilities that arise, even post-deployment.

Detect SBOM Drift

Anchore Enterprise detects SBOM drift in the build process, identifying changes in SBOMs so they can be assessed for new risks or malicious activity. Users can set policy rules that alert them when components are added, changed, or removed so that they can quickly identify new vulnerabilities, developer errors, or malicious efforts to infiltrate builds. See SBOM Drift for more information.

Meet Compliance Requirements

Using the Anchore Enterprise UI or API, users can review SBOMs, generate reports, and export SBOMs as a JSON file. Anchore Enterprise can also export aggregated SBOMs for entire applications that can then be shared externally to meet customer and federal compliance requirements.
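
For example, a minimal export sketch using AnchoreCTL's content command (the same command family shown later for malware content); the digest, the os content type, and the -o json output flag here are illustrative placeholders and may vary by version:

anchorectl image content sha256:xyz -t os -o json > sbom-os.json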

Customized Policy Rules

Anchore Enterprise’s high-fidelity SBOMs provide users with a rich set of metadata that can be used in customized policies.

Reduce False Positives

The extensive information provided in SBOMs generated by Anchore Enterprise allows for more precise vulnerability matching, resulting in higher accuracy and fewer false positives.

Vulnerability and Security Scanning

Vulnerability and security scanning is an essential part of any vulnerability management strategy. Anchore Enterprise enables you to scan for vulnerabilities and security risks at any stage of your software development process, including source code repositories, CI/CD pipelines, container registries, and container runtime environments. By scanning at each stage in the process, you will find vulnerabilities and other security risks earlier and avoid delaying software delivery.

Continuous Scanning and Analysis

Anchore Enterprise provides continuous and automated scanning of an application’s code base, including related artifacts such as containers and code repositories. Anchore Enterprise starts the scanning process by generating and storing a high-fidelity SBOM that identifies all of the open source and proprietary components and their direct and transitive dependencies. Anchore uses this detailed SBOM to accurately identify vulnerabilities and security risks.

Identifying Zero-Day Vulnerabilities

When a zero-day vulnerability arises, Anchore Enterprise can instantly identify which components and applications are impacted by simply re-analyzing your stored SBOMs. You don’t need to re-scan applications or components.

Multiple Vulnerability Feeds

Anchore Enterprise uses a broad set of vulnerability data sources, including the National Vulnerability Database, GitHub Security Advisories, feeds for popular Linux distros and packages, and an Anchore-curated dataset for suppression of known false-positive vulnerability matches. See Data Feeds Overview for more information.

Precision Vulnerability Matching

Anchore Enterprise applies a best-in-class precision matching algorithm to select vulnerability data from the most accurate feed source. For example, when Anchore’s detailed SBOM data identifies that there is a specific Linux distro, such as RHEL, Anchore Enterprise will automatically use that vendor’s feed source instead of reporting every Linux vulnerability. Anchore’s precision vulnerability matching algorithm reduces false positives and false negatives, saving developer time. See the Managing False Positives section within this topic for additional ways that Anchore Enterprise reduces false positives.

Vulnerability Management and Remediation

Focusing solely on identifying vulnerability and security issues without remediating them is not good enough for today's modern DevSecOps teams. Anchore Enterprise combines the power of a rich set of SBOM metadata, reporting, and policy management capabilities to enable customers to remediate issues with the flexibility and granularity needed to avoid disrupting or slowing down software production.

Managing False Positives

Anchore Enterprise provides a number of innovative capabilities to help reduce the number of false positives and optimize the signal-to-noise ratio. It starts with accurate component identification through Anchore's high-fidelity SBOMs and a precision vulnerability matching algorithm for fewer false positives. In addition, allowlists and temporary allowlists provide for exceptions, reducing ongoing alerts. Lastly, Anchore Enterprise enables users to correct false positives so that they are not raised in subsequent scans; corrections increase result accuracy over time and improve the signal-to-noise ratio.

Flexible Policy Enforcement

Anchore Enterprise enables users to define automated rules that indicate which vulnerabilities violate their organizations’ policies. For example, an organization may raise policy violations for vulnerabilities scored as Critical or High that have a fix available. These policy violations can generate alerts and notifications or be used to stop builds in the CI/CD pipeline or prevent code from moving to production. Policy enforcement can be applied at any stage in the development process, from the selection and usage of open source components through the build, staging, and deployment process. See Policy for more information.
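
As a sketch, a rule of this kind is expressed in a policy as a gate/trigger pair with parameters. The example below uses the vulnerabilities gate's package trigger to stop on high-or-greater severity findings with a fix available; treat the exact parameter set as indicative rather than exhaustive for your version:

{
  "gate": "vulnerabilities",
  "trigger": "package",
  "action": "stop",
  "params": [
    { "name": "package_type", "value": "all" },
    { "name": "severity_comparison", "value": ">=" },
    { "name": "severity", "value": "high" },
    { "name": "fix_available", "value": "true" }
  ]
}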

Streamlined Remediation

Anchore Enterprise provides capabilities to automatically alert developers of issues through their existing tools, such as Jira or Slack. It also lets users define actionable remediation workflows with automated remediation recommendations.

Open Source Security, Dependencies, and Licenses

Anchore Enterprise gives users the ability to identify and track open source dependencies that are incorporated at any stage in the software lifecycle. Anchore Enterprise scans source code repositories, CI/CD pipelines, and container registries to generate SBOMs that include both direct and transitive dependencies and to identify exactly where those dependencies are found.

Anchore Enterprise also identifies the relevant open source licenses and enables users to ensure that the open source components used along with their dependencies are compliant with all license requirements. License policies are customizable and can be tailored to fit each organization’s open source requirements.

Compliance with Standards

Anchore Enterprise provides a flexible policy engine that enables you to identify and alert on the most important vulnerabilities and security issues, and to meet internal or external compliance requirements. You can leverage out-of-the-box policy packs for common compliance standards, or create custom policies for your organization’s needs. You can define rules against the most complete set of metadata and apply policies at the most granular level with different rules for different applications, teams, and pipelines.

Anchore offers out-of-the-box policy packs to help you comply with NIST and CIS standards that are foundational for such industry-specific standards as HIPAA and PCI DSS.

Flexible Policy Enforcement

Policies are flexible and provide both notifications and gates to prevent code from moving along the development pipeline or into production based on your criteria. You can define policy rules for image and file metadata, file contents, licenses, and vulnerability scoring. And you can define unique rules for each team, for each application, and for each pipeline.

Automated Rules

Anchore Enterprise enables users to define automated rules that indicate which vulnerabilities violate their organization’s policies. For example, an organization may raise policy violations for vulnerabilities scored as Critical or High that have a fix available. These policy violations can generate alerts and notifications or be used to stop builds in the CI/CD pipeline or prevent code from moving to production. You can apply policy enforcement at any stage in the development process from the selection and usage of open source components through the build, staging, and deployment process.

Anchore Enterprise Policy Packs

Anchore Enterprise provides the following out-of-the-box policy bundles that automate checks for common compliance programs, standards, and laws including CIS, NIST, FedRAMP, CISA vulnerabilities, and more. Policy Packs comprise bundled policies and are flexible so that you can modify them to meet your organization's requirements.

  • FedRAMP

    The FedRAMP Policy validates whether container images scanned by Anchore Enterprise are compliant with the FedRAMP Vulnerability Scanning Requirements and also validates them against FedRAMP controls specified in NIST 800-53 Rev 5 and NIST 800-190.

  • DISA Image Creation and Deployment Guide

    The DISA Image Creation and Deployment Guide Policy provides security and compliance checks that align with specific NIST 800-53 and NIST 800-190 security controls and requirements as described in the Department of Defense (DoD) Container Image Creation and Deployment Guide.

  • DoD Iron Bank

    The DoD Iron Bank Policy validates images against DoD security and compliance requirements in alignment with U.S. Air Force security standards at Platform One and Iron Bank.

  • CIS

    The CIS Policy validates a subset of security and compliance checks against container image best practices and NIST 800-53 and NIST 800-190 security controls and requirements. To expand CIS security controls, you can customize the policies in accordance with CIS Benchmarks.

  • NIST

    The NIST policy validates content against NIST 800-53 and NIST 800-190.

1.2 - Anchore Enterprise Architecture

Anchore Enterprise is a distributed application that runs on supported container runtime platforms. The product is deployed as a series of containers that provide services whose functions are made available through APIs. These APIs can be consumed directly via included clients such as the AnchoreCTL and GUI or via out-of-the-box integrations for use with container registries, Kubernetes, and CI/CD tooling. Alternatively, users can interact directly with the APIs for custom integrations.

Services

The following sections describe the services within Anchore Enterprise.

APIs

Enterprise Public API

The Enterprise API is the primary RESTful API for Anchore Enterprise and is the one used by AnchoreCTL, the GUI and integrations. This API is used to upload data such as a software bill of materials (SBOM) and container images, execute functions and retrieve information (SBOM data, vulnerabilities and the results of policy evaluations). The API also exposes the user and account management functions. See Using the Anchore API for more information.
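
For example, a basic authenticated request to list the images in an account follows the same curl pattern used throughout this documentation (server, port, and credentials are placeholders):

curl -X GET -u {username:password} "http://{servername:port}/v2/images"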

Stateful Services

Data Syncer

The Anchore Enterprise Data Syncer downloads and normalizes data from external sources and makes it available to Anchore Enterprise. It communicates with the hosted Anchore Data Service to fetch the following datasets:

  • vulnerability_db: Contains vulnerability data from the NVD, Red Hat, and other sources.
  • vulnerability_annotations: Contains vulnerability annotations from CISA KEV (Known Exploited Vulnerabilities) that are used to provide additional context to vulnerabilities.
  • malware_signatures: Contains malware signatures from ClamAV that are used to detect malware in images.
  • epss_db: Contains exploit prediction scores and percentiles for vulnerabilities.

Policy Engine

The policy engine is responsible for loading an SBOM and associated content and then evaluating it against a set of policy rules. This resulting policy evaluation is then passed to the Catalog service. The policies are stored as a series of JSON documents that can be uploaded and downloaded via the Enterprise API or edited via the GUI.

Catalog

The catalog is the primary state manager of the system and provides data access to system services from the backend database service (PostgreSQL).

SimpleQueue

The SimpleQueue is a PostgreSQL-backed queue service that the other components use for task execution, notifications, and other asynchronous operations.

Workers

Analyzers

An Analyzer is the component that generates an SBOM from an artifact or source repo (which may be passed through the API or pulled from a registry), performs the vulnerability and policy analysis, and stores the SBOM and the results of the analysis in the organization’s Anchore Enterprise repository. Alternatively the AnchoreCTL client can be used to locally scan and generate the SBOM and then pass the SBOM to the analyzers via the API. Each Analyzer can process one container image or source repo at a time. You can increase the number of Analyzers (within the limits of your contract) to increase the throughput for the system in order to process multiple artifacts concurrently.

Clients

AnchoreCTL

AnchoreCTL is a Go-based command line client for Anchore Enterprise. It can be used to send commands to the backend API as part of manual or automated operations. It can also be used to generate SBOM content that is local to the machine it is run on.

AnchoreCTL is the recommended client for developing custom integrations with CI/CD systems or other situations where local processing of content needs to be performed before being passed to the Enterprise API.
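
For example, the following submits an image for centralized analysis and waits for the analysis to complete, using the same flags documented in the Base Image examples later in this guide (the image reference is a placeholder):

anchorectl image add docker.io/library/alpine:latest -w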

Anchore Enterprise GUI

The Anchore Enterprise GUI is a front end to the API services and simplifies many of the processes associated with creating policies, viewing SBOMs, creating and running reports, and configuring the overall system (notifications, users, registry credentials, and more).

Integrations

External systems can integrate with Anchore Enterprise using software entities that exercise select parts of the Anchore Enterprise API. Such software entities can be executable agents or plugins. We use the generic term integration instance to refer to such a deployed software entity. Anchore Enterprise can receive health reports from integration instances to track and monitor their status (assuming the integration instance implements health reporting).

Kubernetes Admission Controller

The Kubernetes Admission Controller is a plugin that can be used to intercept a container image as it is about to be deployed to Kubernetes. The image is passed to Anchore Enterprise which analyzes it to determine if it meets the organization’s policy rules. The policy evaluation result can then allow, warn, or block the deployment.

Kubernetes and ECS Runtime Inventory

anchore-k8s-inventory and anchore-ecs-inventory are agents that create an ongoing inventory of the images that are running in a Kubernetes or ECS cluster. The agents run inside the runtime environment (under a service account) and connect to the local runtime API. The agents poll the API on an interval to retrieve a list of container images that are currently in use.

Multi-Tenancy

Accounts

Accounts in Anchore Enterprise are a boundary that separates data, policies, notifications, and users into a distinct domain. An account can be mapped to a team, project, or application that needs its own set of policies applied to a specific set of content. Users may be granted access to multiple accounts.

Users

Users are local to an account and can have roles as defined by RBAC. Usernames must be unique across the entire deployment to enable API-based authentication requests. Certain users can be configured such that they have the ability to switch context between accounts, akin to a super user account.

1.3 - Concepts

How does Anchore Enterprise work?

Anchore takes a data-driven approach to analysis and policy enforcement. The system has the following discrete phases for each image analyzed:

  1. Fetch the image content and extract it, but never execute it.
  2. Analyze the image by running a set of Anchore analyzers over the image content to extract and classify as much metadata as possible.
  3. Save the resulting analysis in the database for future use and audit.
  4. Evaluate policies against the analysis result, including vulnerability matches on the artifacts discovered in the image.
  5. Update to the latest external data used for policy evaluation and vulnerability matches (feed sync), and automatically update image analysis results against any new data found upstream.
  6. Notify users of changes to policy evaluations and vulnerability matches.

Steps 5 and 6 repeat at intervals to ensure you have the latest external data and updated image evaluations.

The primary interface is a RESTful API that provides mechanisms to request analysis, policy evaluation, and monitoring of images in registries as well as query for image contents and analysis results. Anchore Enterprise also provides a command-line interface (CLI), and its own container.

The following modes provide different ways to use Anchore within the API:

  • Interactive Mode - Use the APIs to explicitly request an image analysis, or get a policy evaluation and content reports. The system only performs operations when specifically requested by a user.
  • Watch Mode - Use the APIs to configure Anchore Enterprise to poll specific registries and repositories/tags to watch for new images, and then automatically pull and evaluate them. The API sends notifications when a state changes for a tag’s vulnerability or policy evaluation.

Anchore can be easily integrated into most environments and processes using these two modes of operation.

Next Steps

Now let’s get familiar with Images in Anchore.

1.3.1 - Analyzing Images

Once an image is submitted to Anchore Enterprise for analysis, Anchore Enterprise will attempt to retrieve metadata about the image from the Docker registry and, if successful, will download the image and queue the image for analysis.

Anchore Enterprise can run one or more analyzer services to scale out processing of images. The next available analyzer worker will process the image.

During analysis, every package, software library, and file is inspected, and this data is stored in the Anchore database.

Anchore Enterprise includes a number of analyzer modules that extract data from the image including:

  • Image metadata
  • Image layers
  • Operating System Package Data (RPM, DEB, APKG)
  • File Data
  • Ruby Gems
  • Node.JS NPMs
  • Java Archives
  • Python Packages
  • .NET NuGet Packages
  • File content

Once a tag has been added to Anchore Enterprise, the repository will be monitored for updates to that tag. See Image and Tag Watchers for more information about images and tags.

Any updated images will be downloaded and analyzed.

Next Steps

Now let’s get familiar with the Image Analysis Process.

1.3.1.1 - Base and Parent Images

A Docker or OCI image is composed of layers. Some of the layers are created during a build process, such as by following instructions in a Dockerfile. But many of the layers will come from previously built images. These images likely come from a container team at your organization, or may be built directly on images from a Linux distribution vendor. In some cases this chain could be many images deep as various teams add standard software or configuration.

Docker uses the FROM clause to denote an image to use as a basis for building a new image. The image provided in this clause is known by Docker as the Parent Image, but is commonly referred to as the Base Image. This chain of images built from other images using the FROM clause is known as an Image’s ancestry.

Note: Docker defines Base Image as an image built with a FROM scratch clause. Anchore does NOT follow this definition, instead following the more common usage where Base Image refers to the image that a given image was built from.

Example Ancestry

The following is an example of an image with multiple ancestors:

A base distro image, for example debian:10

FROM scratch
...

A framework container image built from that debian image, for example a node.js image we'll call mynode:latest

FROM debian:10

# Install nodejs

The application image itself, built from the framework container, which we'll call myapp:v1

FROM mynode:latest
COPY ./app /
...

These dockerfiles generate the following ancestry graph:

graph
debian:10-->|parent of|mynode:latest
mynode:latest-->|parent of|myapp:v1

Here debian:10 is the parent of mynode:latest, which is the parent of myapp:v1.

Anchore compares the layer digests of images that it knows about to determine an image's ancestry. This ensures that the exact image used to build a new image is identified.

Given our above example, we may see the following layers for each image. Note that each subsequent image is a superset of the previous image's layers:

  • debian:10 layers: sha256:abc, sha256:def
  • mynode:latest layers: sha256:abc, sha256:def, sha256:123
  • myapp:v1 layers: sha256:abc, sha256:def, sha256:123, sha256:456

Each image contains all of its parent's layers plus the layers added by its own build (mynode:latest is built FROM debian:10, and myapp:v1 is built FROM mynode:latest).

Ancestry within Anchore

Anchore automatically calculates an image’s ancestry as images are scanned. This works by comparing the layer digests of each image to calculate the entire chain of images that produced a given image. The entire ancestry can be retrieved for an image through the GET /v2/images/{image_digest}/ancestors API. See the API docs for more information on the specifics.
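
An example request against that route, following the curl conventions used elsewhere in this documentation (credentials, server, and digest are placeholders):

curl -X GET -u {username:password} "http://{servername:port}/v2/images/sha256:xyz/ancestors"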

Base Image

It is often useful to compare an image with another image in its ancestry. For example, to filter out vulnerabilities that are present in a "golden image" from a platform team and show only the vulnerabilities introduced by the application built on that "golden image".

Controlling the Base Image

Users can control which ancestor is chosen as the base image by marking the desired image(s) with a special annotation anchore.user/marked_base_image. The annotation should be set to a value of true, otherwise it will be ignored. This annotation is currently restricted to users in the “admin” account.

If an image with this annotation should no longer be considered a Base Image, then you must update the annotation to false, as it is not currently possible to remove annotations.

Usage of this annotation when calculating the Base Image can be disabled by setting services.policy_engine.enable_user_base_image to false in the configuration file (see deployment specific docs for configuring this setting).
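
A minimal config.yaml fragment illustrating this setting (surrounding configuration elided):

services:
  policy_engine:
    enable_user_base_image: false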

Anchorectl Example

You can add an image with this annotation using AnchoreCTL with the following:

anchorectl image add anchore/test_images:ancestor-base -w --annotation "anchore.user/marked_base_image=true"

If an image should no longer be considered a Base Image you can update the annotation with:

anchorectl image add anchore/test_images:ancestor-base --annotation "anchore.user/marked_base_image=false"

Calculating the Base Image

Anchore will automatically calculate the Base Image from an image’s ancestry using the closest ancestor. From our example above, the Base Image for myapp:v1 is mynode:latest.

The first ancestor with this annotation will be used as the Base Image; if no ancestors have this annotation, then it will fall back to using the closest ancestor (the Parent Image).

The rules for determining the Base Image are encoded in this diagram:

graph
start([start])-->image
image[image]
image-->first_parent_exists
first_parent_exists{Does this image have a parent?}
first_parent_exists-->|No|no_base_image
first_parent_exists-->|yes|first_parent_image
first_parent_image[Parent Image]
first_parent_image-->config
config{User Base Annotations Enabled in configuration?}
config-->|No|base_image
config-->|yes|check_parent
check_parent{Parent has anchore.user/marked_base_image: true annotation}
check_parent-->|No|parent_exists
parent_exists{Does the parent image have a parent?}
parent_exists-->|Yes|parent_image
parent_image[/Move to next Parent Image/]
parent_image-->check_parent
parent_exists-->|No|no_base_image
check_parent-->|Yes|base_image

base_image([Found Base Image])
no_base_image([No Base Image Exists])

Using the Base Image

The Policy evaluation and Vuln Scan APIs have an optional base_digest parameter that is used to provide comparison data between two images. These APIs can be used in conjunction with the ancestry API to perform comparisons to the Base Image so that application developers can focus on results in their direct control. As of Enterprise v5.7.0, a special value auto can also be specified for this parameter to have the system automatically determine which image to use in the comparison based on the above rules.
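
For example, requesting a policy check with the comparison image chosen automatically (digest and tag are placeholders; the route itself is described in Compare Base Image Policy Checks):

curl -X GET -u {username:password} "http://{servername:port}/v2/images/sha256:xyz/check?tag=p/q:r&base_digest=auto"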

To read more about the base comparison features, see Compare Base Image Policy Checks and Compare Base Image Security Vulnerabilities.

In addition to these user facing APIs, a few parts of the system utilize the Ancestry information.

  • The Ancestry Policy Gate uses the Base Image rules to determine which image to evaluate against
  • Reporting uses the Base Image to calculate the “Inherited From Base” column for vulnerabilities
  • The UI displays the Base Image and uses it for Policy Evaluations and Vulnerability Scans

Additional notes about ancestor calculations

An image B is only a child of an image A if all of the layers of image A are present in image B.

For example, mypython and mynode represent two different language runtime images built from a debian base. These two images are not ancestors of each other because the layers in mypython:latest are not a superset of the layers in mynode:latest, nor the other way around.

  • mypython:latest layers: sha256:abc, sha256:def, sha256:456
  • mynode:latest layers: sha256:abc, sha256:def, sha256:123

The two images share their first two layers, but their third layers differ, so neither image's layer set is a superset of the other's.

But these images could be based on a 3rd image which is made up of the 2 layers that they share. If Anchore knows about this 3rd image it would show up as an ancestor for both mypython and mynode. The Anchore UI would identify the debian:10 image as an ancestor of the mypython:latest and the mynode:latest images. But we do not currently expose child ancestors, so we would not show the children of the debian:10 image.

  • mypython:latest layers: sha256:abc, sha256:def, sha256:456 (FROM debian:10)
  • debian:10 layers: sha256:abc, sha256:def
  • mynode:latest layers: sha256:abc, sha256:def, sha256:123 (FROM debian:10)

Both mypython:latest and mynode:latest contain all of debian:10's layers, so debian:10 is an ancestor of each.

1.3.1.1.1 - Compare Base Image Policy Checks

This feature provides a mechanism to compare the policy checks for an image with those of a Base Image. You can read more about Base Images and how to find them here. Base comparison uses the same policy and tag to evaluate both images to ensure a fair comparison. The API yields a response similar to the policy checks API, with an additional element within each triggered gate check to indicate whether the result is inherited from the Base Image.

Usage

This functionality is currently available via the Enterprise UI and API.

API

Refer to the API Access section for the API specification. The policy check API (GET /v2/images/{imageDigest}/check) has an optional base_digest query parameter that can be used to specify an image to compare policy findings to. When this query parameter is provided, each finding's inherited_from_base field will be filled in with true or false to denote whether the finding is present in the provided image. If no image is provided, then the inherited_from_base field will be null to indicate no comparison was performed.

Example request using curl to retrieve the policy check for an image digest sha256:xyz and tag p/q:r, comparing the results to a Base Image digest sha256:abc:

curl -X GET -u {username:password} "http://{servername:port}/v2/images/sha256:xyz/check?tag=p/q:r&base_digest=sha256:abc"

Example output:

{
    "image_digest": "sha256:xyz",
    "evaluated_tag": "p/q:r",
    "evaluations": [
        {
            "comparison_image_digest": "sha256:abc",
            "details": {
                "findings": [
                    {
                        "trigger_id": "41cb7cdf04850e33a11f80c42bf660b3",
                        "gate": "dockerfile",
                        "trigger": "instruction",
                        "message": "Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check",
                        "action": "warn",
                        "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
                        "recommendation": "",
                        "rule_id": "312d9e41-1c05-4e2f-ad89-b7d34b0855bb",
                        "allowlisted": false,
                        "allowlist_match": null,
                        "inherited_from_base": true
                    },
                    {
                        "trigger_id": "CVE-2019-5435+curl",
                        "gate": "vulnerabilities",
                        "trigger": "package",
                        "message": "MEDIUM Vulnerability found in os package type (APKG) - curl (CVE-2019-5435 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5435)",
                        "action": "warn",
                        "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
                        "recommendation": "",
                        "rule_id": "6b5c14e7-a6f7-48cc-99d2-959273a2c6fa",
                        "allowlisted": false,
                        "allowlist_match": null,
                        "inherited_from_base": false
                    }
                ]
                ...
            }
            ...
        }
        ...
    ]
}
  • Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check is triggered by both images and hence inherited_from_base is marked true
  • MEDIUM Vulnerability found in os package type (APKG) - curl (CVE-2019-5435 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5435) is not triggered by the Base Image and therefore the value of inherited_from_base is false

1.3.1.1.2 - Compare Base Image Security Vulnerabilities

This feature provides a mechanism to compare the security vulnerabilities detected in an image with those of a Base Image. You can read more about Base Images and how to find them here. The API yields a response similar to the vulnerabilities API, with an additional element within each result to indicate whether the result is inherited from the Base Image.

Usage

This functionality is currently available via the Enterprise UI and API. Watch this space as we add base comparison support in other tools.

API

Refer to the API Access section for the API specification. The vulnerabilities API GET /v2/images/{image_digest}/vuln/{vtype} has a base_digest query parameter that can be used to specify an image to compare vulnerability findings to. When this query parameter is provided, an additional inherited_from_base field is provided for each vulnerability.

Example request using curl to retrieve security vulnerabilities for an image digest sha256:xyz and compare the results to a Base Image digest sha256:abc:

curl -X GET -u {username:password} "http://{servername:port}/v2/images/sha256:xyz/vuln/all?base_digest=sha256:abc"

Example output:

{
  "base_digest": "sha256:abc",
  "image_digest": "sha256:xyz",
  "vulnerability_type": "all",
  "vulnerabilities": [
    {
      "feed": "vulnerabilities",
      "feed_group": "alpine:3.12",
      "fix": "7.62.0-r0",
      "inherited_from_base": true,
      "nvd_data": [
        {
          "cvss_v2": {
            "base_score": 6.4,
            "exploitability_score": 10.0,
            "impact_score": 4.9
          },
          "cvss_v3": {
            "base_score": 9.1,
            "exploitability_score": 3.9,
            "impact_score": 5.2
          },
          "id": "CVE-2018-16842"
        }
      ],
      "package": "libcurl-7.61.1-r3",
      "package_cpe": "None",
      "package_cpe23": "None",
      "package_name": "libcurl",
      "package_path": "pkgdb",
      "package_type": "APKG",
      "package_version": "7.61.1-r3",
      "severity": "Medium",
      "url": "http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16842",
      "vendor_data": [],
      "vuln": "CVE-2018-16842"
    },
    {
      "feed": "vulnerabilities",
      "feed_group": "alpine:3.12",
      "fix": "2.4.46-r0",
      "inherited_from_base": false,
      "nvd_data": [
        {
          "cvss_v2": {
            "base_score": 5.0,
            "exploitability_score": 10.0,
            "impact_score": 2.9
          },
          "cvss_v3": {
            "base_score": 7.5,
            "exploitability_score": 3.9,
            "impact_score": 3.6
          },
          "id": "CVE-2020-9490"
        }
      ],
      "package": "apache2-2.4.43-r0",
      "package_cpe": "None",
      "package_cpe23": "None",
      "package_name": "apache2",
      "package_path": "pkgdb",
      "package_type": "APKG",
      "package_version": "2.4.43-r0",
      "severity": "Medium",
      "url": "http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9490",
      "vendor_data": [],
      "vuln": "CVE-2020-9490"
    }
  ]
}

Note that inherited_from_base is a new element in the API response added to support base comparison. The assigned boolean value indicates whether the exact vulnerability is present in the Base Image. In the above example:

  • CVE-2018-16842 affects the libcurl-7.61.1-r3 package in both images, hence inherited_from_base is marked true
  • CVE-2020-9490 affects the apache2-2.4.43-r0 package but does not affect the Base Image, and therefore inherited_from_base is set to false

1.3.1.2 - Image Analysis Process

There are two types of image analysis:

  1. Centralized Analysis
  2. Distributed Analysis

Image analysis is performed as a distinct, asynchronous, and scheduled task driven by queues that analyzer workers periodically poll.

Image analysis_status states:

stateDiagram
    [*] --> not_analyzed: analysis queued
    not_analyzed --> analyzing: analyzer starts processing
    analyzing --> analyzed: analysis completed successfully
    analyzing --> analysis_failed: analysis fails
    analyzing --> not_analyzed: re-queue by timeout or analyzer shutdown
    analysis_failed --> not_analyzed: re-queued by user request
    analyzed --> not_analyzed: re-queued for re-processing by user request

Centralized Analysis

The analysis process is composed of several steps and utilizes several system components. The basic flow of the task is shown in the following example:

Centralized analysis high level summary:

sequenceDiagram
    participant A as AnchoreCTL
    participant R as Registry
    participant E as Anchore Deployment
    A->>E: Request Image Analysis
    E->>R: Get Image content
    R-->>E: Image Content
    E->>E: Analyze Image Content (Generate SBOM and secret scans etc) and store results
    E->>E: Scan sbom for vulns and evaluate compliance

The analyzers operate in a task loop, periodically polling for and processing analysis tasks from the queue.


Distributed Analysis

In distributed analysis, the analysis of image content takes place outside the Anchore deployment and the result is imported into the deployment. The image goes through the same state machine transitions, but for an imported analysis the 'analyzing' state covers processing of the import data (vulnerability scanning, policy checks, etc.) to prepare it for internal use; it does not download or touch any image content.

High level example with AnchoreCTL:

sequenceDiagram
    participant A as AnchoreCTL
    participant R as Registry/Docker Daemon
    participant E as Anchore Deployment
    A->>R: Get Image content
    R-->>A: Image Content
    A->>A: Analyze Image Content (Generate SBOM and secret scans etc)
    A->>E: Import SBOM, secret search, fs metadata
    E->>E: Scan sbom for vulns and evaluate compliance

Next Steps

Now let’s get familiar with Watching Images and Tags with Anchore.

1.3.1.2.1 - Malware Scanning

Overview

Anchore Enterprise provides malware scanning with the use of ClamAV. ClamAV is an open-source antivirus solution designed to detect malicious code embedded in container images.

When enabled, malware scanning occurs during Centralized Analysis, when the image content itself is available. Any findings are available via:

  • API: /images/{image_digest}/content/malware
  • AnchoreCTL: anchorectl image content <image_digest> -t malware
  • UI: Image SBOM tab

The Malware Policy Gate also provides compliance rules around any findings.

Please Note: Files in an image which are greater than 2GB will be skipped due to a limitation in ClamAV. Any skipped file will be identified with a Malware Signature as ANCHORE.FILE_SKIPPED.MAX_FILE_SIZE_EXCEEDED.

Signature DB Updates

Each analyzer service will run a malware signature update before analyzing each image. This does add some latency to the overall analysis time but ensures the signatures are as up-to-date as possible for each image analyzed. The update behavior can be disabled if you prefer to manage the freshness of the db via another route, such as a shared filesystem mounted to all analyzer nodes that is updated on a schedule. See the configuration section for details on disabling the db update.
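
As a sketch, disabling the per-image signature update is done in the analyzer service configuration; the key layout below is an assumption based on the db_update_enabled field reported in scan metadata and may differ between versions:

services:
  analyzer:
    malware:
      clamav:
        enabled: true
        db_update_enabled: false   # assumed key; skips the signature update before each analysis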

The status of the db update is present in each scan output for each image.

Scan Results

The malware content type is a list of scan results. Each result is the run of a malware scanner, by default clamav.

The list of files found to contain malware signature matches is in the findings property of each scan result. An empty array value indicates no matches found.

The metadata property provides generic metadata specific to the scanner. For the ClamAV implementation, this includes the version data about the signature db used and if the db update was enabled during the scan. If the db update is disabled, then the db_version property of the metadata will not have values since the only way to get the version metadata is during a db update.

{
    "content": [
        {
            "findings": [
                {
                    "path": "/somebadfile",
                    "signature": "Unix.Trojan.MSShellcode-40"
                },
                {
                    "path": "/somedir/somepath/otherbadfile",
                    "signature": "Unix.Trojan.MSShellcode-40"
                }
            ],
            "metadata": {
                "db_update_enabled": true,
                "db_version": {
                    "bytecode": "331",
                    "daily": "25890",
                    "main": "59"
                }
            },
            "scanner": "clamav"
        }
    ],
    "content_type": "malware",
    "imageDigest": "sha256:0eb874fcad5414762a2ca5b2496db5291aad7d3b737700d05e45af43bad3ce4d"
}

1.3.1.3 - Image and Tag Watchers

Overview

Anchore has the capability to monitor external Docker Registries for updates to tags as well as new tags. It also watches for updates to vulnerability databases and package metadata (the “Feeds”).

Repository Updates: New Tags

The process for monitoring updates to repositories, the addition of new tag names, is done on a duty cycle and performed by the Catalog component(s). The scheduling and tasks are driven by queues provided by the SimpleQueue service.

  1. Periodically, controlled by the cycle_timers configuration in the config.yaml of the catalog, a process is triggered to list all the Repository Subscription records in the system and for each record, add a task to a specific queue.
  2. Periodically, also controlled by the cycle_timers config, a process is triggered to pick up tasks off that queue and process each repository scan task.

The output of this process is new tag_update subscription records, which are subsequently processed by the Tag Update handlers as described below. You can view the tag_update subscriptions using AnchoreCTL:

anchorectl subscription list -t tag_update

Tag Updates: New Images

To detect updates to tags, mapping of a new image digest to a tag name, Anchore periodically checks the registry and downloads the tag’s image manifest to compare the computed digests. This is done on a duty cycle for every tag_update subscription record. Therefore, the more subscribed tags exist in the system, the higher the load on the system to check for updates and detect changes. This processing, like repository update monitoring, is performed by the Catalog component(s).

The duty cycle of this process is configured in the cycle_timers section of the catalog config.yaml.

As new updates are discovered, they are automatically submitted to the analyzers, via the image analysis internal queue, for processing.


Next Steps

Now let’s get familiar with Policy in Anchore.

1.3.1.4 - Analysis Archive

Anchore Enterprise is a data intensive system. Storage consumption grows with the number of images analyzed, which leaves the following options for storage management:

  1. Over-provisioning storage significantly
  2. Increasing capacity over time, resulting in downtime (e.g. stop system, grow the db volume, restart)
  3. Manually deleting image analysis to free space as needed

In most cases, option 1 only works for a while, which then requires using 2 or 3. Managing storage provisioned for a postgres DB is somewhat complex and may require significant data copies to new volumes to grow capacity over time.

To help mitigate the storage growth of the db itself, Anchore Enterprise already provides an object storage subsystem that enables using external object stores like S3 to offload the unstructured data storage needs to systems that are more growth tolerant and flexible. This lowers the db overhead but does not fundamentally address the issue of unbounded growth in a busy system.

The Analysis Archive extends the object store even further by providing a system-managed way to move an image analysis and all of its related data (policy evaluations, tags, annotations, etc.) to a location outside of the main set of images, where it consumes much less storage in the database when using an object store. The archive preserves the last state of the image and supports moving the analysis back into the main image set if it is needed in the future, without requiring that the image be reanalyzed; restoring from the archive does not require the actual docker image to exist at all.

To facilitate this, the system can be thought of as two sets of analysis with different capabilities and properties:

Analysis Data Sets

Working Set Images

The working set is the set of images in the ‘analyzed’ state in the system. These images are stored in the database, optionally with some data in an external object store. Specifically:

  • State = ‘analyzed’
  • The set of images available from the /images api routes
  • Available for policy evaluation, content queries, and vulnerability updates

Archive Set Images

The archive set of images are image analyses that reside almost entirely in the object store, which can be configured to be a different location than the object store used for the working set, with minimal metadata in the anchore DB necessary to track and restore the analysis back into the working set in the future. An archived image analysis preserves all the annotations, tags, and metadata of the original analysis as well as all existing policy evaluation histories, but are not updated with new vulnerabilities during feed syncs and are not available for new policy evaluations or content queries without first being restored into the working set.

  • Not listed in /images API routes
  • Cannot have policy evaluations executed
  • No vulnerability updates automatically (must be restored to working set first)
  • Available from the /archives/images API routes
  • Point-in-time snapshot of the analysis, policy evaluation, and vulnerability state of an image
  • Independently configurable storage location (analysis_archive property in the services.catalog property of config.yaml)
  • Small db storage consumption (if using external object store, only a few small records, bytes)
  • Able to use different type of storage for cost effectiveness
  • Can be restored to the working set at any time to restore full query and policy capabilities
  • The archive object store is not used for any API operations other than the restore process

An image analysis, identified by the digest of the image, may exist in both sets at the same time; the sets are not mutually exclusive. However, the archive is not automatically updated, and must be deleted and re-archived to capture updated state from the working set image if desired.
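
A sketch of the independently configured archive storage location in the services.catalog section of config.yaml; the storage_driver layout mirrors Anchore's object store configuration, and the exact keys and values here are placeholders:

services:
  catalog:
    analysis_archive:
      enabled: true
      storage_driver:
        name: s3                   # e.g. a cheaper S3-compatible store for archived analyses
        config:
          bucket: anchore-archive  # placeholder bucket name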

Benefits of the Archive

Because archived image analyses are stored in a distinct object store and tracked with their own metadata in the db, the images in that set will not impact the performance of working set image operations such as API operations, feed syncs, or notification handling. This helps keep the system responsive and performant in cases where the set of images that you’re interested in is much smaller than the set of images in the system, but you don’t want to delete the analysis because it has value for audit or historical reasons.

  1. Leverage cheaper and more scalable cloud-based storage solutions (e.g. S3 IA class)
  2. Keep the working set small to manage capacity and api performance
  3. Ensure the working set is images you actively need to monitor without losing old data by sending it to the archive

Automatic Archiving

To help facilitate data management automatically, Anchore supports rules to define which data to archive and when based on a few qualities of the image analysis itself. These rules are evaluated periodically by the system.

Anchore supports both account-scoped rules, editable by users in the account, and global system rules, editable only by the system admin account users. All users can view system global rules such that they can understand what will affect their images but they cannot update or delete the rules.

The process of automatic rule evaluation:

  1. The catalog component periodically (daily by default, but configurable) will run through each rule in the system and identify image digests that should be archived according to either account-local rules or system global rules.

  2. Each matching image analysis is added to the archive.

  3. Each successfully added analysis is deleted from the working set.

  4. For each digest migrated, a system event log entry is created, indicating that the image digest was moved to the archive.

Archive Rules

The rules that match images provide 3 selectors:

  1. Analysis timestamp - the age of the analysis itself, as expressed in days
  2. Source metadata (registry, repo, tag) - the values of the registry, repo, and tag values
  3. Tag history depth – the number of images mapped to a tag ordered by detected_at timestamp (the time at which the system observed the mapping of a tag to a specific image manifest digest)

Rule scope:

  • global - these rules will be evaluated against all images and all tags in the system, regardless of the owning account. (system_global = true)
  • account - these rules are only evaluated against the images and tags of the account which owns the rule. (system_global = false)

Example Rule:

{
    "analysis_age_days": 10,
    "created_at": "2019-03-30T22:23:50Z",
    "last_updated": "2019-03-30T22:23:50Z",
    "rule_id": "67b5f8bfde31497a9a67424cf80edf24",
    "selector": {
        "registry": "*",
        "repository": "*",
        "tag": "*"
    },
    "system_global": true,
    "tag_versions_newer": 10,
    "transition": "archive",
    "exclude": {
        "expiration_days": -1,
        "selector": {
            "registry": "docker.io",
            "repository": "alpine",
            "tag": "latest"
        }
    },
    "max_images_per_account": 1000
}
  • selector: a json object defining a set of filters on registry, repository, and tag that this rule will apply to.
    • Each entry supports wildcards. e.g. {"registry": "*", "repository": "library/*", "tag": "latest"}
  • tag_versions_newer: the minimum number of tag->digest mappings with newer timestamps that must be present for this rule to match an image tag.
  • analysis_age_days: the minimum age of the analysis to match, as indicated by the ‘analyzed_at’ timestamp on the image record.
  • transition: the operation to perform, one of the following
    • archive: works on the working set and transitions to archive, while deleting the source analysis upon successful archive creation. Specifically: the analysis will “move” to the archive and no longer be in the working set.
    • delete: works on the archive set and deletes the archived record on a match
  • exclude: a json object defining a set of filters on registry, repository, and tag, that will exclude a subset of image(s) from the selector defined above.
    • expiration_days: This allows the exclusion filter to expire. When set to -1, the exclusion filter does not expire
  • max_images_per_account: This setting may only be applied on a single “system_global” rule, and controls the maximum number of images allowed in the anchore deployment (that are not archived). If this number is exceeded, anchore will transition (according to the transition field value) the oldest images exceeding this maximum count.
  • last_seen_in_days: This allows images to be excluded from the archive, if there is a corresponding runtime inventory image, where last_seen_in_days is within the specified number of days.
    • This exclusion applies to any image last seen within the specified number of days, regardless of whether the image also matches the exclude selector.
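
The tag_versions_newer selector can be subtle; the following minimal Python sketch (conceptual only, with an illustrative record shape rather than the real catalog schema) shows how a tag record qualifies under it:

# Conceptual sketch, not Anchore's implementation: a tag record matches when
# at least `tag_versions_newer` mappings of the same tag to a digest carry
# newer detected_at timestamps (ISO-8601 strings compare chronologically).
def matches_tag_history(all_records, record, tag_versions_newer):
    newer = [r for r in all_records
             if r["tag"] == record["tag"]
             and r["detected_at"] > record["detected_at"]]
    return len(newer) >= tag_versions_newer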

Rule conflicts and application:

For an image to be transitioned by a rule it must:

  • Match at least one rule for each of its tag entries (in the working set for an archive transition, or in the archive set for a delete transition)
  • All rule matches must be of the same scope; global and account rules cannot interact

Put another way, if any tag record for an image analysis is not defined to be transitioned, then the analysis record is not transitioned.
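
For example, an account-scoped rule that prunes archived analyses older than 90 days for a single organization’s repositories might look like the following sketch (the repository value is hypothetical; the fields are those described above):

{
    "analysis_age_days": 90,
    "selector": {
        "registry": "docker.io",
        "repository": "myorg/*",
        "tag": "*"
    },
    "system_global": false,
    "tag_versions_newer": 0,
    "transition": "delete"
}

Because system_global is false, this rule is evaluated only against the archived analyses of the account that owns it.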

Usage

Image analysis can be archived explicitly via the API (and CLI) as well as restored. Alternatively, the API and CLI can manage the rules that control automatic transitions. For more information see the following:

Archiving an Image Analysis

See: Archiving an Image

Restoring an Image Analysis

See: Restoring an Image

Managing Archive Rules

See: Working with Archive Rules

1.3.2 - Policy

Once an image has been analyzed and its content has been discovered, categorized, and processed, the results can be evaluated against a user-defined set of checks to give a final pass/fail recommendation for an image. Anchore Enterprise policies are how users describe which checks to perform on what images and how the results should be interpreted.

A policy is made up of a set of rules that are used to perform an evaluation of a container image. The rules can define checks against an image for things such as:

  • security vulnerabilities
  • package allowlists and denylists
  • configuration file contents
  • presence of credentials in image
  • image manifest changes
  • exposed ports

These checks are defined as Gates that contain Triggers. Triggers perform specific checks and emit match results, and together they define the things that the system can automatically evaluate and return a decision about.

For a full listing of gates, triggers, and their parameters see: Anchore Policy Checks
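
For illustration, each rule in a policy pairs a gate and a trigger with an action. The sketch below uses the dockerfile gate’s exposed_ports trigger; the parameter shown is illustrative, so consult the policy checks reference for the exact parameters each trigger accepts:

{
  "action": "WARN",
  "gate": "dockerfile",
  "trigger": "exposed_ports",
  "params": [
    { "name": "ports", "value": "22" }
  ]
}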

These policies can be applied globally or customized for specific images or categories of applications.

A policy evaluation can return one of two results:

PASSED, indicating that the image complies with your policy.

FAILED, indicating that the image is out of compliance with your policy.

Next Steps

Read more on Policies and Evaluation

1.3.2.1 - Policies and Evaluation

Introduction

Policies are the unit of policy definition and evaluation in Anchore Enterprise. A user may have multiple policies, but for a policy evaluation, the user must specify a policy to be evaluated or default to the policy currently marked ‘active’. See Policies Overview for more detail on manipulating and configuring policies.

Components of a Policy

A policy is a single JSON document, composed of several parts:

  • Rule Sets - The named sets of rules and actions.
  • Allowlists - Named sets of rule exclusions to override a match in a policy rule.
  • Mappings - Ordered rules that determine which policies and allowlists should be applied to a specific image at evaluation time.
  • Allowlisted Images - Overrides for specific images to statically set the final result to a pass regardless of the policy evaluation result.
  • Blocklisted Images - Overrides for specific images to statically set the final result to a fail regardless of the policy evaluation result.

Example JSON for an empty policy, showing the sections and top-level elements:

{
  "id": "default0",
  "version": "2",
  "name": "My Default policy",
  "comment": "My system's default policy",
  "allowlisted_images": [],
  "denylisted_images": [],
  "mappings": [],
  "allowlists": [],
  "rule_sets": []
}

Policies

A policy contains zero or more rule sets. The rule sets in a policy define the checks to make against an image and the actions to recommend if the checks find a match.

Example of a single rule set JSON object, one entry in the rule_sets array of the larger policy document:

{
  "name": "DefaultPolicy", 
  "version": "2",
  "comment": "Policy for basic checks", 
  "id": "ba6daa06-da3b-46d3-9e22-f01f07b0489a", 
  "rules": [
    {
      "action": "STOP", 
      "gate": "vulnerabilities", 
      "id": "80569900-d6b3-4391-b2a0-bf34cf6d813d", 
      "params": [
        { "name": "package_type", "value": "all" }, 
        { "name": "severity_comparison", "value": ">=" }, 
        { "name": "severity", "value": "medium" }
      ], 
      "trigger": "package"
    }
  ]
}

The above example defines a stop action to be produced for all package vulnerabilities found in an image that are severity medium or higher.

For information on how Rule Sets work and are evaluated, see: Rule Sets

Allowlists

An allowlist is a set of exclusion rules for trigger matches found during policy evaluation. An allowlist defines a specific gate and trigger_id (part of the output of a policy rule evaluation) that should have its action recommendation statically set to go. When a policy rule result is allowlisted, it is still present in the output of the policy evaluation, but its action is set to go and it is indicated that there was an allowlist match.

Allowlists are useful for things like:

  • Ignoring CVE matches that are known to be false-positives
  • Ignoring CVE matches on specific packages (perhaps if they are known to be custom patched)

Example of a simple allowlist as a JSON object from a policy:

{
  "id": "allowlist1",
  "name": "Simple Allowlist",
  "version": "2",
  "items": [
    { "id": "item1", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-10000+libssl" },
    { "id": "item2", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-10001+*" }
  ]
}
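
Conceptually, a finding is suppressed when an allowlist item matches its gate and trigger exactly and its trigger_id by wildcard pattern. The following is a minimal sketch of that matching logic in Python (conceptual only, not Anchore’s actual implementation):

from fnmatch import fnmatch

# Conceptual sketch, not Anchore's implementation: a finding is allowlisted
# when any item matches its gate and trigger exactly and its trigger_id by
# wildcard pattern (e.g. "CVE-10001+*").
def is_allowlisted(finding, allowlist):
    return any(
        item["gate"] == finding["gate"]
        and item["trigger"] == finding["trigger"]
        and fnmatch(finding["trigger_id"], item["trigger_id"])
        for item in allowlist["items"]
    )

Under this sketch, a finding with trigger_id CVE-10001+openssl would be suppressed by the CVE-10001+* item in the example above.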

For more information, see Allowlists

Mappings

Mappings are named rules that define which rule sets and allowlists to evaluate for a given image. The list of mappings is evaluated in order: the first rule that matches an input image is used and all others are ignored.

Example of a simple mappings section:

[
  { 
    "name": "DockerHub",
    "registry": "docker.io",
    "repository": "library/postgres",
    "image": { "type": "tag", "value": "latest" },
    "rule_set_ids": [ "policy1", "policy2" ],
    "allowlist_ids": [ "allowlist1", "allowlist2" ]
  },
  {
    "name": "default", 
    "registry": "*",
    "repository": "*",
    "image": { "type": "tag", "value": "*" },
    "rule_set_ids": [ "policy1" ],
    "allowlist_ids": [ "allowlist1" ]
  }
]
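
The first-match semantics can be sketched in Python as follows (conceptual only, assuming tag-type mapping rules; wildcards apply to each field):

from fnmatch import fnmatch

# Conceptual sketch, not Anchore's implementation: mappings are checked in
# list order and the first rule whose registry, repository, and tag all
# match the input image wins.
def select_mapping(mappings, registry, repository, tag):
    for rule in mappings:
        if (fnmatch(registry, rule["registry"])
                and fnmatch(repository, rule["repository"])
                and fnmatch(tag, rule["image"]["value"])):
            return rule
    return None

With the example above, docker.io/library/postgres:latest selects the DockerHub rule, while any other image falls through to the catch-all default rule.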

For more information about mappings see Mappings

Allowlisted Images

Allowlisted images are images, defined by registry, repository, and tag/digest/imageId, that will always result in a pass status for policy evaluation unless the image is also matched in the denylisted images section.

Example image allowlist section:

{ 
  "name": "AllowlistDebianStable",
  "registry": "docker.io",
  "repository": "library/debian",
  "image": { "type": "tag", "value": "stable" }
}

Denylisted Images

Denylisted images are images, defined by registry, repository, and tag/digest/imageId, that will always result in a policy evaluation status of fail. It is important to note that denylisting an image does not short-circuit the mapping evaluation or policy evaluations, so the full set of trigger matches will still be visible in the policy evaluation result.

Denylisted image matches override any allowlisted image matches (e.g. a tag that matches a rule in both lists will always be denylisted and the evaluation will fail).

Example image denylist section:

{ 
  "name": "BlAocklistDebianUnstable",
  "registry": "docker.io",
  "repository": "library/debian",
  "image": { "type": "tag", "value": "unstable" }
}

A complete policy example with all sections containing data:

{
  "id": "default0",
  "version": "2",
  "name": "My Default policy",
  "comment": "My system's default policy",
  "allowlisted_images": [
    {
      "name": "AllowlistDebianStable",
      "registry": "docker.io",
      "repository": "library/debian",
      "image": { "type": "tag", "value": "stable" }
    }
  ],
  "denylisted_images": [
    {
      "name": "DenylistDebianUnstable",
      "registry": "docker.io",
      "repository": "library/debian",
      "image": { "type": "tag", "value": "unstable" }
    }
  ],
  "mappings": [
    {
      "name": "DockerHub", 
      "registry": "docker.io",
      "repository": "library/postgres",
      "image": { "type": "tag", "value": "latest" },
      "rule_set_ids": [ "policy1", "policy2" ],
      "allowlist_ids": [ "allowlist1", "allowlist2" ]
    },
    {
      "name": "default", 
      "registry": "*",
      "repository": "*",
      "image": { "type": "tag", "value": "*" },
      "rule_set_ids": [ "policy1" ],
      "allowlist_ids": [ "allowlist1" ]
    }
  ],
  "allowlists": [
    {
      "id": "allowlist1",
      "name": "Simple Allowlist",
      "version": "2",
      "items": [
        { "id": "item1", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-10000+libssl" },
        { "id": "item2", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-10001+*" }
      ]
    },
    {
      "id": "allowlist2",
      "name": "Simple Allowlist",
      "version": "2",
      "items": [
        { "id": "item1", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-1111+*" }
      ]
    }
  ],
  "rule_sets": [
    {
      "name": "DefaultPolicy",
      "version": "2",
      "comment": "Policy for basic checks",
      "id": "policy1",
      "rules": [
        {
          "action": "STOP",
          "gate": "vulnerabilities",
          "trigger": "package",
          "id": "rule1",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity_comparison", "value": ">=" },
            { "name": "severity", "value": "medium" }
          ]
        }
      ]
    },
    {
      "name": "DBPolicy",
      "version": "1_0",
      "comment": "Policy for basic checks on a db",
      "id": "policy2",
      "rules": [
        {
          "action": "STOP",
          "gate": "vulnerabilities",
          "trigger": "package",
          "id": "rule1",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity_comparison", "value": ">=" },
            { "name": "severity", "value": "low" }
          ]
        }
      ]
    }
  ]
}

Policy Evaluation

A policy evaluation results in a status of pass or fail, based on the evaluation of:

  1. The mapping section to determine which policies and allowlists to select for evaluation against the given image and tag
  2. The output of the policies’ triggers and applied allowlists.
  3. Denylisted images section
  4. Allowlisted images section

A pass status means that the image was evaluated against the policy and only go or warn actions resulted from the policy and allowlist evaluations, or that the image was allowlisted. A fail status means that at least one stop action resulted from the policy and allowlist evaluations, or that the image was denylisted.
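
That determination can be sketched in Python as follows (conceptual only; final_action here is the aggregated result of the rule evaluations described under Rule Sets):

# Conceptual sketch, not Anchore's implementation: a denylist match always
# fails, an allowlist match (absent a denylist match) always passes, and
# otherwise the aggregated final action decides.
def evaluation_status(denylisted, allowlisted, final_action):
    if denylisted:
        return "fail"
    if allowlisted:
        return "pass"
    return "fail" if final_action == "stop" else "pass"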

[Figure: flow chart of the policy evaluation process]

Next Steps

Read more about the Rule Sets component of a policy.

1.3.2.2 - Rule Sets

Overview

A rule set is a named set of rules, represented as a JSON object within a Policy. A rule set is made up of rules that define a specific check to perform and a resulting action.

A Rule Set is made up of:

  • ID: a unique id for the rule set within the policy
  • Name: a human-readable name for the rule set (may contain spaces, etc.)
  • A list of rules to define what to evaluate and the action to recommend on any matches for the rule

A simple example of a rule_set JSON object (found within a larger policy object):

{
  "name": "DefaultPolicy",
  "version": "2",
  "comment": "Policy for basic checks",
  "id": "policy1",
  "rules": [
      {
        "action": "STOP",
        "gate": "vulnerabilities",
        "id": "rule1",
        "params": [
          { "name": "package_type", "value": "all" },
          { "name": "severity_comparison", "value": ">=" },
          { "name": "severity", "value": "medium" }
        ],
        "trigger": "package",
        "recommendation": "Upgrade the package",
      }
  ]
}

The above example defines a stop action to be produced for all package vulnerabilities found in an image that are severity medium or higher.

Policy evaluation is the execution of all defined triggers in the rule set against the image analysis result and feed data and results in a set of output trigger matches, each of which contains the defined action from the rule definition. The final recommendation value for the policy evaluation is called the final action, and is computed from the set of output matches: stop, go, or warn.


Policy Rules

Rules define the behavior of the policy at evaluation time. Each rule defines:

  • Gate - example: dockerfile
  • Trigger - example: exposed_ports
  • Parameters - parameters specific to the gate/trigger to customize its match behavior
  • Action - the action to emit if a trigger evaluation finds a match. One of stop, go, warn. The only semantics of these values are in the aggregation behavior for the policy result.

Gates

A Gate is a logical grouping of trigger definitions and provides a broader context for the execution of triggers against image analysis data. You can think of gates as the “things to be checked”, while the triggers provide the “which check to run” context. Gates do not have parameters themselves, but namespace the set of triggers to ensure there are no name conflicts.

Examples of gates:

  • vulnerabilities
  • packages
  • npms
  • files

For a complete listing see: Anchore Policy Checks

Triggers

Triggers define a specific condition to check within the context of a gate, optionally with one or more input parameters. A trigger is logically a piece of code that executes with the image analysis content and feed data as inputs and performs a specific check. A trigger emits matches for each instance of the condition it checks for in the image. Thus, a single gate/trigger policy rule may result in many matches in the final policy result, often with different match specifics (e.g. package names, CVEs, or filenames).

Trigger parameters are passed as name, value pairs in the rule JSON:

{
  "action": "WARN",
  "parameters": [
    {  "name": "param1", "value": "value1" },
    {  "name": "param2", "value": "value2" },
    {  "name": "paramN", "value": "valueN" }
  ],
  "gate": "vulnerabilities",
  "trigger": "packages",
}

For a complete listing of gates, triggers, and their parameters, see: Anchore Policy Gates

Policy Evaluation

  • All rules in a selected rule_set are evaluated; there are no short-circuits
  • Rules whose triggers and parameters find a match in the image analysis data will “fire”, resulting in a record of the match and its parameters. A trigger may fire many times during an evaluation (e.g. when many CVEs are found).
  • Each firing of a trigger generates a trigger_id for that match
  • Rules may be executed in any order, and are executed in isolation (e.g. conflicting rules are allowed; it is up to the user to ensure that policies make sense)

A policy evaluation will always contain information about the policy and image that was evaluated as well as the Final Action. The evaluation can optionally include additional detail about the specific findings from each rule in the evaluated rule_set as well as suggested remediation steps.

Policy Evaluation Findings

When extra detail is requested as part of the policy evaluation, the following data is provided for each finding produced by the rules in the evaluated rule_set.

  • trigger_id - An ID for the specific rule match that can be used to allowlist a finding
  • gate - The name of the gate that generated this finding
  • trigger - The name of the trigger within the Gate that generated this finding
  • message - A human readable description of the finding
  • action - One of go, warn, stop based on the action defined in the rule that generated this finding
  • policy_id - The ID for the rule_set that this rule is a part of
  • recommendation - An optional recommendation provided as part of the rule that generated this finding
  • rule_id - The ID of the rule that generated this finding
  • allowlisted - Indicates if this match was present in the applied allowlist
  • allowlist_match - Only provided if allowlisted is true; contains a JSON object with details about an allowlist match (allowlist ID, name, and allowlist rule ID)
  • inherited_from_base - An optional field that indicates if this policy finding was present in a provided comparison image

Excerpt from a policy evaluation, showing just the policy evaluation output:

"findings": [
  {
    "trigger_id": "CVE-2008-3134+imagemagick-6.q16",
    "gate": "package",
    "trigger": "vulnerabilities",
    "message": "MEDIUM Vulnerability found in os package type (dpkg) - imagemagick-6.q16 (CVE-2008-3134 - https://security-tracker.debian.org/tracker/CVE-2008-3134)",
    "action": "go",
    "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
    "recommendation": "Upgrade the package",
    "rule_id": "rule1",
    "allowlisted": false,
    "allowlist_match": null,
    "inherited_from_base": false
  },
  {
    "trigger_id": "CVE-2008-3134+libmagickwand-6.q16-2",
    "gate": "package",
    "trigger": "vulnerabilities",
    "message": "MEDIUM Vulnerability found in os package type (dpkg) - libmagickwand-6.q16-2 (CVE-2008-3134 - https://security-tracker.debian.org/tracker/CVE-2008-3134)",
    "action": "go",
    "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
    "recommendation": "Upgrade the package",
    "rule_id": "rule1",
    "allowlisted": false,
    "allowlist_match": null,
    "inherited_from_base": false
  }
]

Final Action

The final action of a policy evaluation is the policy’s recommendation based on the aggregation of all trigger evaluations defined in the policy and the resulting matches emitted.

The final action of a policy evaluation will be:

  • stop - if there are any triggers that match with this action, the policy evaluation will result in an overall stop.
  • warn - if there are any triggers that match with this action, and no triggers that match with stop, then the policy evaluation will result in warn.
  • go - if there are no triggers that match with either stop or warn, then the policy evaluation result is go. go actions have no impact on the evaluation result, but are useful for recording the results of specific checks on an image in the audit trail of policy evaluations over time.
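
A minimal sketch of that aggregation in Python (conceptual only):

# Conceptual sketch, not Anchore's implementation: stop dominates warn,
# which dominates go.
def final_action(match_actions):
    if "stop" in match_actions:
        return "stop"
    if "warn" in match_actions:
        return "warn"
    return "go"

For example, final_action(["go", "warn", "go"]) returns "warn".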

The policy findings are one part of the broader policy evaluation which includes things like image allowlists and denylists and makes a final policy evaluation status determination based on the combination of several component executions. See policies for more information on that process.

Next Steps

Read more about the Mappings component of a policy.

1.3.3 - Remediation

After Anchore analyzes images, discovers their contents and matches vulnerabilities, it can suggest possible actions that can be taken.

These actions range from adding a Healthcheck to your Dockerfile to upgrading a package version.

Since the solutions for resolving vulnerabilities can vary and may require several different forms of remediation and intervention, Anchore provides the capability to plan out your course of action.

Action Plans

Action plans group the resolutions that may be taken to address the vulnerabilities or issues found in a particular image, and provide a way for you to take action.

Currently, we support one type of Action Plan, which sends those resolutions to an existing notification endpoint configuration. This is a great way to facilitate communication across teams when vulnerabilities need to be addressed.

Here’s an example JSON that describes an Action Plan for notifications:

{
    "type": "notification",
    "image_tag": "docker.io/alpine:latest",
    "image_digest": "sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a",
    "bundle_id": "anchore_default_bundle",
    "resolutions": [
        {
            "trigger_ids": ["CVE-2020-11-09-fake"],
            "content": "This is a Resolution for the CVE",
        }
    ],
    "subject": "Actions required for image: alpine:latest",
    "message": "These are some issues Anchore found in alpine:latest, and how to resolve them",
    "endpoint": "smtp",
    "configuration_id": "cda118f9ec63ddefb4a173a2b2a03"
}

Parts:

  • type: The type of action plan being submitted (currently, only notification is supported)
  • image_tag: The full image tag of the image requiring action
  • image_digest: The image digest of the image requiring action
  • bundle_id: the id of the policy bundle that discovered the vulnerabilities
  • resolutions: A list composed of the remediations and corresponding trigger IDs
  • subject: The subject line for the action plan notification
  • message: The body of the message for the action plan notification
  • endpoint: The type of notification endpoint the action plan will be sent to
  • configuration_id: The uuid of the notification configuration for the above endpoint

1.4 - Anchore Enterprise Data Feeds

Anchore Data Service Overview

Anchore operates a hosted service called the Anchore Data Service that serves pre-built datasets to customer Enterprise deployments.

Anchore Data Service manages four datasets:

  • Vulnerability Database (grypedb) - This dataset contains vulnerability data from the following sources:
    • Alpine
    • Amazon Linux
    • Anchore Exclusions (CVEs that Anchore has excluded from the feed)
    • Chainguard
    • Debian
    • GitHub
    • Mariner
    • MSRC
    • NVD (Including the Anchore Enhancements)
    • Oracle
    • RHEL
    • SLES
    • Ubuntu
    • Wolfi
  • ClamAV Malware Database - This dataset contains malware signatures that are used to detect malware in images.
  • CISA Known Exploited Vulnerabilities (KEV) - This dataset contains vulnerability annotations that are used to provide additional context to vulnerabilities.
  • Exploit Prediction Scoring System (EPSS) - This dataset contains exploit prediction scores for vulnerabilities.

These datasets are refreshed by pipelines that run every 6 hours.

Data Syncer Service Design

Anchore Enterprise includes a service, called the Data Syncer Service, that is responsible for syncing the datasets from the Anchore Data Service and making them available for use by the rest of Anchore Enterprise.

Data Service Flow

The following two FQDNs need to be allowlisted in your network to allow the Data Syncer Service to communicate with the Anchore Data Service:

https://data.anchore-enterprise.com
https://s3.us-west-2.amazonaws.com/enterprise-data-service.production.anchore.io
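
To verify that these endpoints are reachable from the host running the Data Syncer Service, a simple connectivity check is shown below (a sketch using curl; any HTTP client will do, and any HTTP status code in the response indicates network reachability):

# curl -sS -o /dev/null -w "%{http_code}\n" https://data.anchore-enterprise.com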

Authentication for this service is provided by your Anchore Enterprise license. No additional credentials are required.

To learn more, please review the Data Syncer Service Configuration doc.

2 - Deploying Anchore Enterprise

Anchore Enterprise and its components are delivered as Docker container images which can be deployed as co-located, fully distributed, or anything in-between. Anchore Enterprise can run on a single host or be deployed in a scale out pattern for increased analysis throughput.

To get up and running, jump to the following guides of your choosing:

2.1 - Requirements

This section details the general requirements for running Anchore Enterprise. For a conceptual understanding of Anchore Enterprise, please see the Overview topic prior to deploying the software.

Runtime

Anchore Enterprise requires a Docker compatible runtime (version 1.12 or higher). Deployment is supported on:

  • Docker Compose (for demo or proof-of-concept and small deployments)
  • Any Kubernetes Certified Service Provider (KCSP) as certified by the Cloud Native Computing Foundation (CNCF) via Helm.
  • Any Kubernetes Certified Distribution as certified by the Cloud Native Computing Foundation (CNCF) via Helm.
  • Amazon Elastic Container Service (ECS) via Helm.

Resourcing

Use-case and usage patterns will determine the resource requirements for Anchore Enterprise. When deploying via Helm, requests and limits are set in the values.yaml file. When deploying via docker compose, add reservations and limits into your docker compose file. The following recommendations can get you started:

  • Requests specify the desired resource amounts for the container, while limits specify the maximum resource amounts the container is allowed. We have found that setting the request and limit to the same value provides the best quality of service (QoS) from Kubernetes. We do not recommend setting limits for CPU.

  • We do not recommend setting less than 1 CPU unit for any containers. Less than this could result in unexpected behaviour and should only be used in testing scenarios.

  • For the catalog, policy and postgresql service containers, we recommend a minimum of 2 CPU units.

  • We do not recommend setting memory units to less than 8G except for API and UI services, where we recommend starting at 4G. Less than these values could result in OOM errors or containers restarting unexpectedly.

If you intend to use Kubernetes, the default values.yaml found in the Anchore Enterprise Helm Chart provides some resourcing recommendations to get you started.
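
For example, when deploying via Docker Compose, reservations and limits can be declared per service. The fragment below is a sketch that applies the guidance above to the catalog service (values are illustrative; adjust them for your deployment):

services:
  catalog:
    deploy:
      resources:
        reservations:
          cpus: "2"     # at least 2 CPU units recommended for the catalog service
          memory: 8G
        limits:
          memory: 8G    # match the reservation for predictable QoS; no CPU limit is set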

Database

The only service dependency strictly required by Anchore Enterprise is a PostgreSQL database (13.0 or higher) that all services connect to, but do not use for communication beyond some very simple service registration/lookup processes. The database is centralized simply for ease of management and operation. For more information, go to Anchore Enterprise Architecture. Anchore Enterprise uses this database to provide persistent storage for image, policy and analysis data.

A PostgreSQL database ships with the default deployment mechanisms for Anchore Enterprise. This is often referred to as the Anchore-managed database. This can be run in a container, as configured in the example Docker Compose file and default Helm values file.

The PostgreSQL database requirement can also be provided as an external service to Anchore Enterprise. PostgreSQL compatible databases, such as Amazon RDS for PostgreSQL, can be used for highly-scalable cloud deployments.

Network

An Anchore Enterprise deployment requires the following three categories of network access:

  • Service Access
    • Connectivity between Anchore Enterprise services, including access to an external database.
  • Registry Access
    • Network connectivity, including DNS resolution, to the registries from which Anchore Enterprise needs to download images.
  • Anchore Data Service Access
    • Anchore Enterprise requires access to the datasets in order to perform analysis and vulnerability matching. See Anchore Enterprise Data Feeds for more information.

Security

Anchore Enterprise is deployed as source repositories or container images that can be run manually using Docker Compose, Kubernetes or any other supported container platform.

By default, Anchore Enterprise does not require any special permissions. It can be run as an unprivileged container with no access to the underlying Docker host.

Note: Anchore Enterprise can be configured to pull images through the Docker Socket. However, this configuration is not recommended, as it grants the Anchore Enterprise container added privileges, and may incur a performance impact on the Docker Host.

Storage

Anchore Enterprise can be configured to depend on other storage for various artifacts. For full details on storage configuration, see Storage Configuration.

  • Configuration volumes: this volume is used to provide persistent storage to the container from which it will read its configuration files, and optionally - certificates. Requirement: Less than 1MB.
  • [Optional] Scratch space: this temporary storage volume is recommended but not required. During the analysis of images, Anchore Enterprise downloads and extracts all of the layers required for an image. These layers are extracted and analyzed, after which the layers and extracted data are deleted. If temporary storage is not configured, the container’s ephemeral storage will be used to store temporary files. However, performance is likely to be improved by using a dedicated volume.
  • [Optional] Layer cache: another temporary storage volume may also be used for image-layer caching to speed up analysis. This caches image layers for re-use by analyzers when generating an SBOM / analyzing an image.
  • [Optional] Object storage: Anchore Enterprise stores documents containing archives of image analysis data and policies as JSON documents. By default, these documents are stored within the PostgreSQL database. However, Anchore Enterprise can be configured to store archive documents in a filesystem (volume), S3 Object store, or Swift Object Store. Requirement: Number of images x 10MB (estimated).

Enterprise UI

The Anchore Enterprise UI module interfaces with Anchore API using the external API endpoint. The UI requires access to the Anchore database where it creates its own namespace for persistent configuration storage. Additionally, a Redis database deployed and managed by Anchore Enterprise through the supported deployment mechanisms is used to store session information.

  • Network
    • Ingress
      • The Anchore UI module publishes a web UI service by default on port 3000; however, this port can be remapped.
    • Egress
      • The Anchore UI module requires access to three network services at a minimum:
        • External API endpoint (typically port 8228)
        • Redis Database (typically port 6379)
        • PostgreSQL Database (typically port 5432)
  • Redis Service
    • Version 7 or higher

Optimizing your Deployment

Optimizing your Anchore deployment on Kubernetes involves various strategies to enhance performance, reliability, and scalability. Here are some key tips:

  • Ensure that your Analyzer, API, Catalog, and Policy service containers have adequate CPU and memory resources. See the recommended values here: https://github.com/anchore/support-tools/blob/main/customer-onboarding/enterprise-chart-values.yaml
  • Integrate with monitoring tools like Prometheus and Grafana to track key metrics such as CPU usage, memory usage, analysis times, and feed sync status. You can also set up alerts for critical thresholds. Follow our Monitoring guides for Prometheus and Grafana setup.
  • For large deployments, it is good practice to schedule regular vacuuming, indexing, and performance tuning to keep the database running efficiently.
  • Layer caching can significantly speed up image analysis by reusing layers that haven’t changed, reducing analysis times and improving efficiency. Follow our guide on Layer Caching setup.

Next Steps

If you feel you have a solid grasp of the requirements for deploying Anchore Enterprise, we recommend following one of our installation guides.

2.2 - Deploy using Docker Compose

In this topic, you’ll learn how to use Docker Compose to get up and running with a stand-alone Anchore Enterprise deployment.

This document is intended to serve as a quickstart guide. Before moving further with Anchore Enterprise, it is highly recommended to read the Overview sections to gain a deeper understanding of fundamentals, concepts, and proper usage.

Before You Start

The following instructions assume you are using a system running Docker v1.12 or higher, and a version of Docker Compose that supports at least v2 of the docker compose configuration format.

  • A stand-alone deployment requires at least 16GB of RAM, and enough disk space available to support the largest container images or source repositories that you intend to analyze. It is recommended to consider three times the largest source repository or container image size. For small testing, like basic Linux distro images or database images, between 20GB and 40GB of disk space should be sufficient.
  • To access Anchore Enterprise, you need a valid license.yaml file that has been issued to you by Anchore. If you do not have a license yet, visit the Anchore Contact page to request one.

Getting Started

Follow the steps below to get up and running!

Step 1: Check access to images

You’ll need authenticated access to the anchore/enterprise and anchore/enterprise-ui repositories on Docker Hub. Anchore Customer Success will provide a Docker Hub PAT (Personal Access Token) for access to the images. Log in with your PAT to pull images from Docker Hub:

# docker login -u <your_dockerhub_pat_user> -p <your_dockerhub_pat>

Step 2: Configure & run

Download the Docker Compose File into a working directory where you have also placed the license.yaml file you got from Anchore.

# curl https://docs.anchore.com/current/docs/deployment/docker_compose/docker-compose.yaml > docker-compose.yaml
# cp <path/to/your/license.yaml> ./license.yaml

Edit the compose file to set all instances of ANCHORE_ADMIN_PASSWORD to a strong password of your choice.

- ANCHORE_ADMIN_PASSWORD=yourstrongpassword

Then start your environment from your working directory:

# docker compose up -d

Step 3: Install AnchoreCTL

Next, we’ll install the lightweight Anchore Enterprise client tool, quickly test using the version operation, and set up a few environment variables to allow it to interact with your quickstart deployment using the admin password you defined in the previous step.

# curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b /usr/local/bin v5.14.0

# ./anchorectl version
Application:        anchorectl
Version:            5.14.0
SyftVersion:        v0.97.1
BuildDate:          2023-11-21T22:09:54Z
GitCommit:          f7604438b45f7161c11145999897d4ae3efcb0c8
GitDescription:     v5.14.0
Platform:           linux/amd64
GoVersion:          go1.21.1
Compiler:           gc

# export ANCHORECTL_URL="http://localhost:8228"
# export ANCHORECTL_USERNAME="admin"
# export ANCHORECTL_PASSWORD="yourstrongpassword"

NOTE: for this quickstart, we’re installing the tool in your local directory ./ and will be using environment variables throughout. To install anchorectl in a globally accessible path and configure it so that setting environment variables is not needed, see Installing AnchoreCTL.

Step 4: Verify service availability

After a few minutes (depending on system speed) Anchore Enterprise and Anchore UI services should be up and running, ready to use. You can verify the containers are running with docker compose, as shown in the following example.

# docker compose ps
             Name                           Command                  State               Ports         
-------------------------------------------------------------------------------------------------------
anchorequickstart_analyzer_1          /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchorequickstart_anchore-db_1        docker-entrypoint.sh postgres    Up             5432/tcp              
anchorequickstart_api_1               /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8228->8228/tcp
anchorequickstart_catalog_1           /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp           
anchorequickstart_data-syncer_1       /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8778->8228/tcp  
anchorequickstart_notifications_1     /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8668->8228/tcp
anchorequickstart_policy-engine_1     /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchorequickstart_queue_1             /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp    
anchorequickstart_reports_1           /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8558->8228/tcp
anchorequickstart_reports_worker_1    /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:55427->8228/tcp
anchorequickstart_ui-redis_1          docker-entrypoint.sh redis ...   Up             6379/tcp              
anchorequickstart_ui_1                /docker-entrypoint.sh node ...   Up             0.0.0.0:3000->3000/tcp

You can then run a command to get the status of the Anchore Enterprise services:


# ./anchorectl system status
 ✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE         │ HOST ID            │ URL                         │ UP   │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer        │ anchore-quickstart │ http://analyzer:8228        │ true │ available      │ 5140       │ 5.14.0       │
│ policy_engine   │ anchore-quickstart │ http://policy-engine:8228   │ true │ available      │ 5140       │ 5.14.0       │
│ apiext          │ anchore-quickstart │ http://api:8228             │ true │ available      │ 5140       │ 5.14.0       │
│ reports         │ anchore-quickstart │ http://reports:8228         │ true │ available      │ 5140       │ 5.14.0       │
│ reports_worker  │ anchore-quickstart │ http://reports-worker:8228  │ true │ available      │ 5140       │ 5.14.0       │
│ data_syncer     │ anchore-quickstart │ http://data-syncer:8228     │ true │ available      │ 5140       │ 5.14.0       │
│ simplequeue     │ anchore-quickstart │ http://queue:8228           │ true │ available      │ 5140       │ 5.14.0       │
│ notifications   │ anchore-quickstart │ http://notifications:8228   │ true │ available      │ 5140       │ 5.14.0       │
│ catalog         │ anchore-quickstart │ http://catalog:8228         │ true │ available      │ 5140       │ 5.14.0       │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘

Note: The first time you run Anchore Enterprise, vulnerability data will sync to the system in a few minutes. For the best experience, wait until the core vulnerability data feeds have completed before proceeding.
You can check the status of your feed sync using AnchoreCTL:

# ./anchorectl feed list    
 ✔ List feed                                                                                                                                                                                                                                                            
┌────────────────────────────────────────────┬────────────────────┬─────────┬──────────────────────┬──────────────┐
│ FEED                                       │ GROUP              │ ENABLED │ LAST UPDATED         │ RECORD COUNT │
├────────────────────────────────────────────┼────────────────────┼─────────┼──────────────────────┼──────────────┤
│ ClamAV Malware Database                    │ clamav_db          │ true    │ 2024-09-30T18:06:05Z │ 1            │
│ Vulnerabilities                            │ github:composer    │ true    │ 2024-09-30T18:12:03Z │ 4040         │
│ Vulnerabilities                            │ github:dart        │ true    │ 2024-09-30T18:12:03Z │ 8            │
│ Vulnerabilities                            │ github:gem         │ true    │ 2024-09-30T18:12:03Z │ 817          │
│ Vulnerabilities                            │ github:go          │ true    │ 2024-09-30T18:12:03Z │ 1879         │
│ Vulnerabilities                            │ github:java        │ true    │ 2024-09-30T18:12:03Z │ 5060         │
│ Vulnerabilities                            │ github:npm         │ true    │ 2024-09-30T18:12:03Z │ 15619        │
│ Vulnerabilities                            │ github:nuget       │ true    │ 2024-09-30T18:12:03Z │ 624          │
│ Vulnerabilities                            │ github:python      │ true    │ 2024-09-30T18:12:03Z │ 3229         │
│ Vulnerabilities                            │ github:rust        │ true    │ 2024-09-30T18:12:03Z │ 804          │
│ Vulnerabilities                            │ github:swift       │ true    │ 2024-09-30T18:12:03Z │ 32           │
│ Vulnerabilities                            │ msrc:10378         │ true    │ 2024-09-30T18:12:16Z │ 2668         │
│ Vulnerabilities                            │ msrc:10379         │ true    │ 2024-09-30T18:12:16Z │ 2645         │
│ Vulnerabilities                            │ msrc:10481         │ true    │ 2024-09-30T18:12:16Z │ 1951         │
│ Vulnerabilities                            │ msrc:10482         │ true    │ 2024-09-30T18:12:16Z │ 2028         │
│ Vulnerabilities                            │ msrc:10483         │ true    │ 2024-09-30T18:12:16Z │ 2822         │
│ Vulnerabilities                            │ msrc:10484         │ true    │ 2024-09-30T18:12:16Z │ 1934         │
│ Vulnerabilities                            │ msrc:10543         │ true    │ 2024-09-30T18:12:16Z │ 2796         │
│ Vulnerabilities                            │ msrc:10729         │ true    │ 2024-09-30T18:12:16Z │ 2908         │
│ Vulnerabilities                            │ msrc:10735         │ true    │ 2024-09-30T18:12:16Z │ 3006         │
│ Vulnerabilities                            │ msrc:10788         │ true    │ 2024-09-30T18:12:16Z │ 466          │
│ Vulnerabilities                            │ msrc:10789         │ true    │ 2024-09-30T18:12:16Z │ 437          │
│ Vulnerabilities                            │ msrc:10816         │ true    │ 2024-09-30T18:12:16Z │ 3328         │
│ Vulnerabilities                            │ msrc:10852         │ true    │ 2024-09-30T18:12:16Z │ 3043         │
│ Vulnerabilities                            │ msrc:10853         │ true    │ 2024-09-30T18:12:16Z │ 3167         │
│ Vulnerabilities                            │ msrc:10855         │ true    │ 2024-09-30T18:12:16Z │ 3300         │
│ Vulnerabilities                            │ msrc:10951         │ true    │ 2024-09-30T18:12:16Z │ 716          │
│ Vulnerabilities                            │ msrc:10952         │ true    │ 2024-09-30T18:12:16Z │ 766          │
│ Vulnerabilities                            │ msrc:11453         │ true    │ 2024-09-30T18:12:16Z │ 1240         │
│ Vulnerabilities                            │ msrc:11454         │ true    │ 2024-09-30T18:12:16Z │ 1290         │
│ Vulnerabilities                            │ msrc:11466         │ true    │ 2024-09-30T18:12:16Z │ 395          │
│ Vulnerabilities                            │ msrc:11497         │ true    │ 2024-09-30T18:12:16Z │ 1454         │
│ Vulnerabilities                            │ msrc:11498         │ true    │ 2024-09-30T18:12:16Z │ 1514         │
│ Vulnerabilities                            │ msrc:11499         │ true    │ 2024-09-30T18:12:16Z │ 981          │
│ Vulnerabilities                            │ msrc:11563         │ true    │ 2024-09-30T18:12:16Z │ 1344         │
│ Vulnerabilities                            │ msrc:11568         │ true    │ 2024-09-30T18:12:16Z │ 2993         │
│ Vulnerabilities                            │ msrc:11569         │ true    │ 2024-09-30T18:12:16Z │ 3095         │
│ Vulnerabilities                            │ msrc:11570         │ true    │ 2024-09-30T18:12:16Z │ 2900         │
│ Vulnerabilities                            │ msrc:11571         │ true    │ 2024-09-30T18:12:16Z │ 3266         │
│ Vulnerabilities                            │ msrc:11572         │ true    │ 2024-09-30T18:12:16Z │ 3238         │
│ Vulnerabilities                            │ msrc:11583         │ true    │ 2024-09-30T18:12:16Z │ 1038         │
│ Vulnerabilities                            │ msrc:11644         │ true    │ 2024-09-30T18:12:16Z │ 1054         │
│ Vulnerabilities                            │ msrc:11645         │ true    │ 2024-09-30T18:12:16Z │ 1089         │
│ Vulnerabilities                            │ msrc:11646         │ true    │ 2024-09-30T18:12:16Z │ 1055         │
│ Vulnerabilities                            │ msrc:11647         │ true    │ 2024-09-30T18:12:16Z │ 1074         │
│ Vulnerabilities                            │ msrc:11712         │ true    │ 2024-09-30T18:12:16Z │ 1442         │
│ Vulnerabilities                            │ msrc:11713         │ true    │ 2024-09-30T18:12:16Z │ 1491         │
│ Vulnerabilities                            │ msrc:11714         │ true    │ 2024-09-30T18:12:16Z │ 1447         │
│ Vulnerabilities                            │ msrc:11715         │ true    │ 2024-09-30T18:12:16Z │ 999          │
│ Vulnerabilities                            │ msrc:11766         │ true    │ 2024-09-30T18:12:16Z │ 912          │
│ Vulnerabilities                            │ msrc:11767         │ true    │ 2024-09-30T18:12:16Z │ 915          │
│ Vulnerabilities                            │ msrc:11768         │ true    │ 2024-09-30T18:12:16Z │ 940          │
│ Vulnerabilities                            │ msrc:11769         │ true    │ 2024-09-30T18:12:16Z │ 934          │
│ Vulnerabilities                            │ msrc:11800         │ true    │ 2024-09-30T18:12:16Z │ 382          │
│ Vulnerabilities                            │ msrc:11801         │ true    │ 2024-09-30T18:12:16Z │ 1277         │
│ Vulnerabilities                            │ msrc:11802         │ true    │ 2024-09-30T18:12:16Z │ 1277         │
│ Vulnerabilities                            │ msrc:11803         │ true    │ 2024-09-30T18:12:16Z │ 981          │
│ Vulnerabilities                            │ msrc:11896         │ true    │ 2024-09-30T18:12:16Z │ 792          │
│ Vulnerabilities                            │ msrc:11897         │ true    │ 2024-09-30T18:12:16Z │ 762          │
│ Vulnerabilities                            │ msrc:11898         │ true    │ 2024-09-30T18:12:16Z │ 763          │
│ Vulnerabilities                            │ msrc:11923         │ true    │ 2024-09-30T18:12:16Z │ 1733         │
│ Vulnerabilities                            │ msrc:11924         │ true    │ 2024-09-30T18:12:16Z │ 1726         │
│ Vulnerabilities                            │ msrc:11926         │ true    │ 2024-09-30T18:12:16Z │ 1536         │
│ Vulnerabilities                            │ msrc:11927         │ true    │ 2024-09-30T18:12:16Z │ 1503         │
│ Vulnerabilities                            │ msrc:11929         │ true    │ 2024-09-30T18:12:16Z │ 1433         │
│ Vulnerabilities                            │ msrc:11930         │ true    │ 2024-09-30T18:12:16Z │ 1429         │
│ Vulnerabilities                            │ msrc:11931         │ true    │ 2024-09-30T18:12:16Z │ 1474         │
│ Vulnerabilities                            │ msrc:12085         │ true    │ 2024-09-30T18:12:16Z │ 1044         │
│ Vulnerabilities                            │ msrc:12086         │ true    │ 2024-09-30T18:12:16Z │ 1053         │
│ Vulnerabilities                            │ msrc:12097         │ true    │ 2024-09-30T18:12:16Z │ 964          │
│ Vulnerabilities                            │ msrc:12098         │ true    │ 2024-09-30T18:12:16Z │ 939          │
│ Vulnerabilities                            │ msrc:12099         │ true    │ 2024-09-30T18:12:16Z │ 943          │
│ Vulnerabilities                            │ nvd                │ true    │ 2024-09-30T18:12:10Z │ 264156       │
│ Vulnerabilities                            │ alpine:3.10        │ true    │ 2024-09-30T18:11:54Z │ 2321         │
│ Vulnerabilities                            │ alpine:3.11        │ true    │ 2024-09-30T18:11:54Z │ 2659         │
│ Vulnerabilities                            │ alpine:3.12        │ true    │ 2024-09-30T18:11:54Z │ 3193         │
│ Vulnerabilities                            │ alpine:3.13        │ true    │ 2024-09-30T18:11:54Z │ 3684         │
│ Vulnerabilities                            │ alpine:3.14        │ true    │ 2024-09-30T18:11:54Z │ 4265         │
│ Vulnerabilities                            │ alpine:3.15        │ true    │ 2024-09-30T18:11:54Z │ 4815         │
│ Vulnerabilities                            │ alpine:3.16        │ true    │ 2024-09-30T18:11:54Z │ 5271         │
│ Vulnerabilities                            │ alpine:3.17        │ true    │ 2024-09-30T18:11:54Z │ 5630         │
│ Vulnerabilities                            │ alpine:3.18        │ true    │ 2024-09-30T18:11:54Z │ 6144         │
│ Vulnerabilities                            │ alpine:3.19        │ true    │ 2024-09-30T18:11:54Z │ 6348         │
│ Vulnerabilities                            │ alpine:3.2         │ true    │ 2024-09-30T18:11:54Z │ 305          │
│ Vulnerabilities                            │ alpine:3.20        │ true    │ 2024-09-30T18:11:54Z │ 6444         │
│ Vulnerabilities                            │ alpine:3.3         │ true    │ 2024-09-30T18:11:54Z │ 470          │
│ Vulnerabilities                            │ alpine:3.4         │ true    │ 2024-09-30T18:11:54Z │ 679          │
│ Vulnerabilities                            │ alpine:3.5         │ true    │ 2024-09-30T18:11:54Z │ 902          │
│ Vulnerabilities                            │ alpine:3.6         │ true    │ 2024-09-30T18:11:54Z │ 1075         │
│ Vulnerabilities                            │ alpine:3.7         │ true    │ 2024-09-30T18:11:54Z │ 1461         │
│ Vulnerabilities                            │ alpine:3.8         │ true    │ 2024-09-30T18:11:54Z │ 1671         │
│ Vulnerabilities                            │ alpine:3.9         │ true    │ 2024-09-30T18:11:54Z │ 1955         │
│ Vulnerabilities                            │ alpine:edge        │ true    │ 2024-09-30T18:11:54Z │ 6467         │
│ Vulnerabilities                            │ amzn:2             │ true    │ 2024-09-30T18:11:48Z │ 2280         │
│ Vulnerabilities                            │ amzn:2022          │ true    │ 2024-09-30T18:11:48Z │ 276          │
│ Vulnerabilities                            │ amzn:2023          │ true    │ 2024-09-30T18:11:48Z │ 736          │
│ Vulnerabilities                            │ chainguard:rolling │ true    │ 2024-09-30T18:12:02Z │ 4487         │
│ Vulnerabilities                            │ debian:10          │ true    │ 2024-09-30T18:11:55Z │ 32021        │
│ Vulnerabilities                            │ debian:11          │ true    │ 2024-09-30T18:11:55Z │ 33574        │
│ Vulnerabilities                            │ debian:12          │ true    │ 2024-09-30T18:11:55Z │ 32529        │
│ Vulnerabilities                            │ debian:13          │ true    │ 2024-09-30T18:11:55Z │ 31702        │
│ Vulnerabilities                            │ debian:7           │ true    │ 2024-09-30T18:11:55Z │ 20455        │
│ Vulnerabilities                            │ debian:8           │ true    │ 2024-09-30T18:11:55Z │ 24058        │
│ Vulnerabilities                            │ debian:9           │ true    │ 2024-09-30T18:11:55Z │ 28240        │
│ Vulnerabilities                            │ debian:unstable    │ true    │ 2024-09-30T18:11:55Z │ 35992        │
│ Vulnerabilities                            │ mariner:1.0        │ true    │ 2024-09-30T18:12:11Z │ 2092         │
│ Vulnerabilities                            │ mariner:2.0        │ true    │ 2024-09-30T18:12:11Z │ 2627         │
│ Vulnerabilities                            │ ol:5               │ true    │ 2024-09-30T18:12:01Z │ 1255         │
│ Vulnerabilities                            │ ol:6               │ true    │ 2024-09-30T18:12:01Z │ 1709         │
│ Vulnerabilities                            │ ol:7               │ true    │ 2024-09-30T18:12:01Z │ 2199         │
│ Vulnerabilities                            │ ol:8               │ true    │ 2024-09-30T18:12:01Z │ 1910         │
│ Vulnerabilities                            │ ol:9               │ true    │ 2024-09-30T18:12:01Z │ 874          │
│ Vulnerabilities                            │ rhel:5             │ true    │ 2024-09-30T18:12:06Z │ 7193         │
│ Vulnerabilities                            │ rhel:6             │ true    │ 2024-09-30T18:12:06Z │ 11129        │
│ Vulnerabilities                            │ rhel:7             │ true    │ 2024-09-30T18:12:06Z │ 11376        │
│ Vulnerabilities                            │ rhel:8             │ true    │ 2024-09-30T18:12:06Z │ 7007         │
│ Vulnerabilities                            │ rhel:9             │ true    │ 2024-09-30T18:12:06Z │ 4040         │
│ Vulnerabilities                            │ sles:11            │ true    │ 2024-09-30T18:12:19Z │ 594          │
│ Vulnerabilities                            │ sles:11.1          │ true    │ 2024-09-30T18:12:19Z │ 6125         │
│ Vulnerabilities                            │ sles:11.2          │ true    │ 2024-09-30T18:12:19Z │ 3291         │
│ Vulnerabilities                            │ sles:11.3          │ true    │ 2024-09-30T18:12:19Z │ 7081         │
│ Vulnerabilities                            │ sles:11.4          │ true    │ 2024-09-30T18:12:19Z │ 6583         │
│ Vulnerabilities                            │ sles:12            │ true    │ 2024-09-30T18:12:19Z │ 6018         │
│ Vulnerabilities                            │ sles:12.1          │ true    │ 2024-09-30T18:12:19Z │ 6205         │
│ Vulnerabilities                            │ sles:12.2          │ true    │ 2024-09-30T18:12:19Z │ 8339         │
│ Vulnerabilities                            │ sles:12.3          │ true    │ 2024-09-30T18:12:19Z │ 10396        │
│ Vulnerabilities                            │ sles:12.4          │ true    │ 2024-09-30T18:12:19Z │ 10215        │
│ Vulnerabilities                            │ sles:12.5          │ true    │ 2024-09-30T18:12:19Z │ 12444        │
│ Vulnerabilities                            │ sles:15            │ true    │ 2024-09-30T18:12:19Z │ 8737         │
│ Vulnerabilities                            │ sles:15.1          │ true    │ 2024-09-30T18:12:19Z │ 9245         │
│ Vulnerabilities                            │ sles:15.2          │ true    │ 2024-09-30T18:12:19Z │ 9573         │
│ Vulnerabilities                            │ sles:15.3          │ true    │ 2024-09-30T18:12:19Z │ 10074        │
│ Vulnerabilities                            │ sles:15.4          │ true    │ 2024-09-30T18:12:19Z │ 10438        │
│ Vulnerabilities                            │ sles:15.5          │ true    │ 2024-09-30T18:12:19Z │ 10882        │
│ Vulnerabilities                            │ sles:15.6          │ true    │ 2024-09-30T18:12:19Z │ 3778         │
│ Vulnerabilities                            │ ubuntu:12.04       │ true    │ 2024-09-30T18:12:37Z │ 14934        │
│ Vulnerabilities                            │ ubuntu:12.10       │ true    │ 2024-09-30T18:12:37Z │ 5641         │
│ Vulnerabilities                            │ ubuntu:13.04       │ true    │ 2024-09-30T18:12:37Z │ 4117         │
│ Vulnerabilities                            │ ubuntu:14.04       │ true    │ 2024-09-30T18:12:37Z │ 37919        │
│ Vulnerabilities                            │ ubuntu:14.10       │ true    │ 2024-09-30T18:12:37Z │ 4437         │
│ Vulnerabilities                            │ ubuntu:15.04       │ true    │ 2024-09-30T18:12:37Z │ 6220         │
│ Vulnerabilities                            │ ubuntu:15.10       │ true    │ 2024-09-30T18:12:37Z │ 6489         │
│ Vulnerabilities                            │ ubuntu:16.04       │ true    │ 2024-09-30T18:12:37Z │ 35057        │
│ Vulnerabilities                            │ ubuntu:16.10       │ true    │ 2024-09-30T18:12:37Z │ 8607         │
│ Vulnerabilities                            │ ubuntu:17.04       │ true    │ 2024-09-30T18:12:37Z │ 9095         │
│ Vulnerabilities                            │ ubuntu:17.10       │ true    │ 2024-09-30T18:12:37Z │ 7908         │
│ Vulnerabilities                            │ ubuntu:18.04       │ true    │ 2024-09-30T18:12:37Z │ 29591        │
│ Vulnerabilities                            │ ubuntu:18.10       │ true    │ 2024-09-30T18:12:37Z │ 8460         │
│ Vulnerabilities                            │ ubuntu:19.04       │ true    │ 2024-09-30T18:12:37Z │ 8742         │
│ Vulnerabilities                            │ ubuntu:19.10       │ true    │ 2024-09-30T18:12:37Z │ 8496         │
│ Vulnerabilities                            │ ubuntu:20.04       │ true    │ 2024-09-30T18:12:37Z │ 25673        │
│ Vulnerabilities                            │ ubuntu:20.10       │ true    │ 2024-09-30T18:12:37Z │ 10112        │
│ Vulnerabilities                            │ ubuntu:21.04       │ true    │ 2024-09-30T18:12:37Z │ 11365        │
│ Vulnerabilities                            │ ubuntu:21.10       │ true    │ 2024-09-30T18:12:37Z │ 12635        │
│ Vulnerabilities                            │ ubuntu:22.04       │ true    │ 2024-09-30T18:12:37Z │ 24135        │
│ Vulnerabilities                            │ ubuntu:22.10       │ true    │ 2024-09-30T18:12:37Z │ 14483        │
│ Vulnerabilities                            │ ubuntu:23.04       │ true    │ 2024-09-30T18:12:37Z │ 15562        │
│ Vulnerabilities                            │ ubuntu:23.10       │ true    │ 2024-09-30T18:12:37Z │ 18433        │
│ Vulnerabilities                            │ ubuntu:24.04       │ true    │ 2024-09-30T18:12:37Z │ 20148        │
│ Vulnerabilities                            │ wolfi:rolling      │ true    │ 2024-09-30T18:11:56Z │ 2906         │
│ Vulnerabilities                            │ anchore:exclusions │ true    │ 2024-09-30T18:11:56Z │ 12851        │
│ CISA KEV (Known Exploited Vulnerabilities) │ kev_db             │ true    │ 2024-09-30T18:07:21Z │ 1185         │
│ Exploit Prediction Scoring System Database │ epss_db            │ true    │ 2024-11-18T18:04:12Z │ 266565       │
└────────────────────────────────────────────┴────────────────────┴─────────┴──────────────────────┴──────────────┘

As soon as you see RecordCount values set for all vulnerability groups, the system is fully populated and ready to present vulnerability results. Note that data syncs are incremental, so the next time you start up Anchore Enterprise it will be ready immediately. AnchoreCTL includes a useful utility that will block until the feeds have completed a successful sync:


# ./anchorectl system wait
 ✔ API available                                                                                        system
 ✔ Services available                        [10 up]                                                    system
 ✔ Vulnerabilities feed ready                                                                           system

Step 5: Start using Anchore

To get started, you can add a few images to Anchore Enterprise using AnchoreCTL. Once complete, you can also run an additional AnchoreCTL command to monitor the analysis state of the added images, waiting until they move into the ‘analyzed’ state.

# ./anchorectl image add docker.io/library/alpine:latest
 ✔ Added Image                                                                                                              docker.io/library/alpine:latest
Image:
  status:           not-analyzed (active)
  tag:              docker.io/library/alpine:latest
  digest:           sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870
  id:               9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5

# ./anchorectl image add docker.io/library/nginx:latest
 ✔ Added Image                                                                                                              docker.io/library/nginx:latest
Image:
  status:           not-analyzed (active)
  tag:              docker.io/library/nginx:latest
  digest:           sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
  id:               2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
  distro:           debian@11 (amd64)
  layers:           6

# ./anchorectl image list
 ✔ Fetched images
┌───────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────────┬────────┐
│ TAG                                                   │ DIGEST                                                                  │ ANALYSIS     │ STATUS │
├───────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────────┼────────┤
│ docker.io/library/alpine:latest                       │ sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870 │ analyzed     │ active │
│ docker.io/library/nginx:latest                        │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ not_analyzed │ active │
└───────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────────┴────────┘

# ./anchorectl image add docker.io/library/nginx:latest --force --wait
 ⠏ Adding Image                                                                                                              docker.io/library/nginx:latest
 ⠼ Analyzing Image                           [analyzing]                                                                     docker.io/library/nginx:latest
...
...
 ✔ Analyzed Image                                                                                                            docker.io/library/nginx:latest
Image:
  status:           analyzed (active)
  tags:             docker.io/library/nginx:latest
  digest:           sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
  id:               2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
  distro:           debian@11 (amd64)
  layers:           6

# ./anchorectl image list
 ✔ Fetched images
┌───────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────┬────────┐
│ TAG                                                   │ DIGEST                                                                  │ ANALYSIS │ STATUS │
├───────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────┼────────┤
│ docker.io/library/alpine:latest                       │ sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870 │ analyzed │ active │
│ docker.io/library/nginx:latest                        │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ analyzed │ active │
└───────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────┴────────┘

Now that some images are in place, you can point your browser at the Anchore Enterprise UI at http://localhost:3000/.

Enter the username admin and the password you defined in the compose file to log in. These are some of the features you can use in the browser:

  • Navigate images
  • Inspect image contents
  • Perform security scans
  • Review compliance policy evaluations
  • Edit compliance policies with a complete policy editor UI
  • Manage accounts, users, and RBAC assignments
  • Review system events

Next Steps

Now that you have Anchore Enterprise running, you can begin to learn more about Anchore capabilities, architecture, concepts, and more.

Optional: Enabling Prometheus Monitoring

  1. Uncomment the following section at the bottom of the docker-compose.yaml file:

    #  # Uncomment this section to add a prometheus instance to gather metrics. This is mostly for quickstart to demonstrate prometheus metrics exported
    #  prometheus:
    #    image: docker.io/prom/prometheus:latest
    #    depends_on:
    #      - api
    #    volumes:
    #      - ./anchore-prometheus.yml:/etc/prometheus/prometheus.yml:z
    #    logging:
    #      driver: "json-file"
    #      options:
    #        max-size: 100m
    #    ports:
    #      - "9090:9090"
    #
    
  2. For each service entry in the docker-compose.yaml, change the following to enable metrics in the API for each service

    ANCHORE_ENABLE_METRICS=false
    

    to

    ANCHORE_ENABLE_METRICS=true
    
  3. Download the example Prometheus configuration into the same directory as the docker-compose.yaml file, with name anchore-prometheus.yml:

    curl https://docs.anchore.com/current/docs/deployment/anchore-prometheus.yml > anchore-prometheus.yml
    docker compose up -d
    

    Result: You should see a new container started, and you can access Prometheus in your browser at http://localhost:9090.
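
To verify from the command line, you can also query Prometheus’s standard readiness and targets endpoints, assuming the default port mapping above:

# Readiness endpoint built into Prometheus
curl -s http://localhost:9090/-/ready
# List the scrape targets Prometheus has discovered
curl -s http://localhost:9090/api/v1/targets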

Optional: Enabling Swagger UI

  1. Uncomment the following section at the bottom of the docker-compose.yaml file:

    #  # Uncomment this section to run a swagger UI service, for inspecting and interacting with the system API via a browser (http://localhost:8080 by default, change if needed in both sections below)
    #  swagger-ui-nginx:
    #    image: docker.io/nginx:latest
    #    depends_on:
    #      - api
    #      - swagger-ui
    #    ports:
    #      - "8080:8080"
    #    volumes:
    #      - ./anchore-swaggerui-nginx.conf:/etc/nginx/nginx.conf:z
    #    logging:
    #      driver: "json-file"
    #      options:
    #        max-size: 100m
    #  swagger-ui:
    #    image: docker.io/swaggerapi/swagger-ui
    #    environment:
    #      - URL=http://localhost:8080/v2/openapi.json
    #    logging:
    #      driver: "json-file"
    #      options:
    #        max-size: 100m
    
  2. Download the nginx configuration into the same directory as the docker-compose.yaml file, with name anchore-swaggerui-nginx.conf:

    curl https://docs.anchore.com/current/docs/deployment/anchore-swaggerui-nginx.conf > anchore-swaggerui-nginx.conf
    docker compose up -d
    

    Result: You should see a new container started, and you can access the Swagger UI in your browser at http://localhost:8080.
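
To verify from the command line, you can fetch the OpenAPI document that the Swagger UI is configured to load; the URL matches the URL environment variable in the compose snippet above:

curl -s http://localhost:8080/v2/openapi.json | head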

2.3 - Deploy on Kubernetes using Helm

The supported method for deploying Anchore Enterprise on Kubernetes is with Helm. The Anchore Enterprise Helm Chart includes configuration options for a full Enterprise deployment.

About the Helm Chart

Important: Release notes can be found in the README in the chart repository.

The chart is split into global and service-specific configurations for the core features, as well as global and service-specific configurations for the optional Enterprise services.

  • The anchoreConfig section of the values file contains the application configuration for Anchore Enterprise. This includes the database connection information, credentials, and other application settings.
  • Anchore services run as Kubernetes Deployments when installed with the Helm chart. Each service has its own section in the values file for making customizations and configuring the Kubernetes deployment spec.

For a description of each service component, see Anchore Enterprise Service Overview.

Note If you are moving from the Anchore Engine Helm chart deployment to the updated Anchore Enterprise Helm chart, see here for further guidance.

Prerequisites

See the README in the chart repository for prerequisites before starting the deployment.

Installing the Chart

This guide covers deploying Anchore Enterprise on a Kubernetes cluster with the default configuration. Refer to the Configuration section of the chart README for additional guidance on production deployments.

  1. Create the namespace: The following steps require that the namespace has already been created.

    export NAMESPACE=anchore
    
    kubectl create namespace ${NAMESPACE}
    
  2. Create a Kubernetes Secret for License File: Generate a Kubernetes secret to store your Anchore Enterprise license file.

    export NAMESPACE=anchore
    export LICENSE_PATH="license.yaml"
    
    kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=${LICENSE_PATH} -n ${NAMESPACE}
    
  3. Create a Kubernetes Secret for DockerHub Credentials: Generate another Kubernetes secret for DockerHub credentials. These credentials should have access to private Anchore Enterprise repositories. We recommend that you create a brand new DockerHub user for these pull credentials. Contact Anchore Support to obtain access.

    export NAMESPACE=anchore
    export DOCKERHUB_PASSWORD="password"
    export DOCKERHUB_USER="username"
    export DOCKERHUB_EMAIL="[email protected]"
    
    kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=${DOCKERHUB_USER} --docker-password=${DOCKERHUB_PASSWORD} --docker-email=${DOCKERHUB_EMAIL} -n ${NAMESPACE}
    
  4. Add Chart Repository & Deploy Anchore Enterprise: Create a custom values file, named anchore_values.yaml, to override any chart parameters. Refer to the Parameters section for available options; a minimal example is sketched after these steps.

    Important: Default passwords are specified in the chart. It’s highly recommended to modify these before deploying.

    Note: The RELEASE variable should not contain any dots.

    export NAMESPACE=anchore
    export RELEASE=my-release
    
    helm repo add anchore https://charts.anchore.io
    helm install ${RELEASE} -n ${NAMESPACE} anchore/enterprise -f anchore_values.yaml
    

    Note: This command installs Anchore Enterprise with a chart-managed PostgreSQL database, which may not be suitable for production use. See the External Database section of the chart README for details on using an external database.

  5. Post-Installation Steps: Anchore Enterprise will take some time to initialize. After the bootstrap phase, it will begin a vulnerability feed sync. Image analysis will show zero vulnerabilities, and the UI will show errors until this sync is complete. This can take several hours based on the enabled feeds. Use the following anchorectl commands to check the system status:

    export NAMESPACE=anchore
    export RELEASE=my-release
    export ANCHORECTL_URL=http://localhost:8228
    export ANCHORECTL_PASSWORD=$(kubectl get secret "${RELEASE}-enterprise" -o jsonpath='{.data.ANCHORE_ADMIN_PASSWORD}' | base64 -d -)
    
    kubectl port-forward -n ${NAMESPACE} svc/${RELEASE}-enterprise-api 8228:8228 # port forward for anchorectl in another terminal
    anchorectl system status # anchorectl defaults to the user admin, and to the password ${ANCHORECTL_PASSWORD} automatically if set
    

    Tip: List all releases using helm list
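
For reference, a minimal anchore_values.yaml might only override the default admin password. This is a sketch; confirm the anchoreConfig.default_admin_password parameter name against the chart README:

anchoreConfig:
  # Override the chart-default admin password before deploying
  default_admin_password: "<strong-password>"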

Next Steps

Now that you have Anchore Enterprise running, you can begin learning more about Anchore Enterprise architecture, Anchore concepts, and Anchore usage.

  • To learn more about Anchore Enterprise, go to Overview
  • To learn more about Anchore Concepts, go to Concepts

2.3.1 - Deploying Anchore Enterprise on Azure Kubernetes Service (AKS)

This document will walk you through the deployment of Anchore Enterprise in an Azure Kubernetes Service (AKS) cluster and expose it on the public Internet.

Prerequisites

  • A running AKS cluster with worker nodes launched. See AKS Documentation for more information on this setup.
  • Helm client on local host.
  • AnchoreCTL installed on a local host.

Once you have an AKS cluster up and running with worker nodes launched, you can verify it via the following command.

$ kubectl get nodes

NAME                       STATUS   ROLES   AGE     VERSION
aks-nodepool1-28659018-0   Ready    agent   4m13s   v1.13.10
aks-nodepool1-28659018-1   Ready    agent   4m15s   v1.13.10
aks-nodepool1-28659018-2   Ready    agent   4m6s    v1.13.10

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:

  • Anchore Enterprise software
  • PostgreSQL (13 or higher)
  • Redis (4)

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore; this document is intended to cover the minimum required changes to successfully deploy Anchore Enterprise in AKS.

Note: For this installation, an NGINX ingress controller will be used. You can read more about Kubernetes Ingress in AKS here.

Configurations

Make the following changes to your anchore_values.yaml:

Ingress

ingress:
  enabled: true
  labels: {}
  apiPaths:
    - /v2/
  uiPath: /
  annotations:
    kubernetes.io/ingress.class: nginx

Note: Configuring ingress is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

Anchore API Service

# Pod configuration for the anchore api service.
api:
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Note: Changed the service type to NodePort.

Anchore Enterprise UI

ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

Note: Changed service type to NodePort.

Install NGINX Ingress Controller

Using Helm, install an NGINX ingress controller in your AKS cluster.

helm install stable/nginx-ingress --generate-name --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux

Deploy Anchore Enterprise

Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.

Create a Kubernetes secret containing your license file:

kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

Create a Kubernetes secret containing DockerHub credentials with access to the private Anchore Enterprise software:

kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

Deploy Anchore Enterprise:

helm repo add anchore https://charts.anchore.io
helm install anchore anchore/enterprise -f anchore_values.yaml

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl get pods

NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           3/3     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 2/2     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-feeds-77b8976c4c-rs8h2                                    1/1     Running   0          13m
anchore-feeds-db-0                                                1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m
mangy-serval-nginx-ingress-controller-788dd98c8b-jv2wg            1/1     Running   0          21m
mangy-serval-nginx-ingress-default-backend-8686cd585b-4m2bt       1/1     Running   0          21m

You can see that the NGINX ingress controller from the previous step has been installed as well. View the services by running the following command:

$ kubectl get services | grep ingress

mangy-serval-nginx-ingress-controller                LoadBalancer   10.0.30.174    40.114.26.147   80:31176/TCP,443:30895/TCP                     22m
mangy-serval-nginx-ingress-default-backend           ClusterIP      10.0.243.221   <none>          80/TCP                                         22m

Note: The above output shows that the IP address of the NGINX ingress controller is 40.114.26.147. Navigating to this address in the browser will take you to the Anchore login page.
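
As a quick command-line check before opening the browser, you can confirm that the ingress responds at that address; the IP below is the example address from this guide:

# Print only the HTTP status code returned via the NGINX ingress
curl -s -o /dev/null -w '%{http_code}\n' http://40.114.26.147/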

login

Anchore System

Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://40.114.26.147/v2/ ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status

Anchore Feeds

It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:

ANCHORECTL_URL=http://40.114.26.147/v2/ ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list

Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync occurs.

Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Vulnerability Management section of our documentation for more information.

2.3.2 - Deploying Anchore Enterprise on Amazon EKS

Get an understanding of the deployment of Anchore Enterprise on an Amazon EKS cluster and expose it on the public Internet.

Note: When using AWS, consider utilizing Amazon RDS for a managed database service.

Prerequisites

  • A running Amazon EKS cluster with worker nodes launched. See EKS Documentation for more information on this setup.
  • Helm client installed on local host.
  • AnchoreCTL installed on local host.

Once you have an EKS cluster up and running with worker nodes launched, you can verify it using the following command.

$ kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-2-164.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-35-43.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-55-228.ec2.internal   Ready    <none>   10m   v1.14.6-eks-5047ed

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:

  • Anchore Enterprise software
  • PostgreSQL (13 or higher)
  • Redis (4)

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore. The following is intended to cover the minimum required changes to successfully deploy Anchore Enterprise on Amazon EKS.

Note: For this installation, an ALB ingress controller will be used. You can read more about Kubernetes Ingress with AWS Load Balancer Controller here.

Configurations

Make the following changes to your anchore_values.yaml:

Ingress

ingress:
  enabled: true
  apiPaths:
    - /v2/
    - /version/
  uiPath: /
  ingressClassName: alb
  annotations:
    # See https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/guide/ingress/annotations.md for further customization of annotations
    alb.ingress.kubernetes.io/scheme: internet-facing
  # If you do not plan to bring your own hostname (i.e. use the AWS supplied CNAME for the load balancer) then you can leave apiHosts & uiHosts as empty lists:
  apiHosts: []
  uiHosts: []
  # If you plan to bring your own hostname then you'll likely want to populate them as follows:
  # apiHosts:
  #   - anchore.mydomain.com
  # uiHosts:
  #   - anchore.mydomain.com

Note: There are alternative ways to access services within your EKS cluster besides ingress. It is used throughout this guide to expose the Anchore deployment on the public (or private) internet.

Anchore API Service

# Pod configuration for the anchore api service.
api:
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Note: Changed the service type to NodePort.

Anchore Enterprise UI

ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

Note: Changed service type to NodePort.

Timeout Alignment between gunicorn & ALB

AWS ALB connection idle timeout defaults to 60 seconds. The Anchore Helm charts have a timeout setting that defaults to 5 seconds, which should be aligned with the ALB timeout setting. Sporadic HTTP 502 errors may be emitted by the ALB if the timeouts are not in alignment. Reference: https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2272

anchoreConfig:
  server:
    timeout_keep_alive: 65

Note: Changed timeout_keep_alive from 5 to 65 to align with the ALB’s default timeout of 60.

AWS EKS Configurations

ALB Ingress

Please reference https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/deploy/installation.md for installing and configuring the AWS Load Balancer Controller (formerly known as alb-ingress-controller).

Anchore Enterprise Deployment

Create Secrets

Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.

Create a Kubernetes secret containing your license file:

kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

Create a Kubernetes secret containing DockerHub credentials with access to the private Anchore Enterprise software:

kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

Deploy Anchore Enterprise:

helm repo add anchore https://charts.anchore.io

helm install anchore anchore/enterprise -f anchore_values.yaml

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           3/3     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 2/2     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-feeds-77b8976c4c-rs8h2                                    1/1     Running   0          13m
anchore-feeds-db-0                                                1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m

Run the following command for details on the deployed ingress resource:

$ kubectl describe ingress
Name:             anchore-enterprise
Namespace:        default
Address:          xxxxxxx-default-anchoreen-xxxx-xxxxxxxxx.us-east-1.elb.amazonaws.com
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     
        /v2/*   anchore-enterprise-api:8228 (192.168.42.122:8228)
        /*      anchore-enterprise-ui:80 (192.168.14.212:3000)
Annotations:
  alb.ingress.kubernetes.io/scheme:  internet-facing
  kubernetes.io/ingress.class:       alb
Events:
  Type    Reason  Age   From                    Message
  ----    ------  ----  ----                    -------
  Normal  CREATE  14m   alb-ingress-controller  LoadBalancer 904f0f3b-default-anchoreen-d4c9 created, ARN: arn:aws:elasticloadbalancing:us-east-1:077257324153:loadbalancer/app/904f0f3b-default-anchoreen-d4c9/4b0e9de48f13daac
  Normal  CREATE  14m   alb-ingress-controller  rule 1 created with conditions [{    Field: "path-pattern",    Values: ["/v2/*"]  }]
  Normal  CREATE  14m   alb-ingress-controller  rule 2 created with conditions [{    Field: "path-pattern",    Values: ["/*"]  }]

The output above shows that an ELB has been created. Navigate to the specified URL in a browser:

login

Anchore System

Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://xxxxxx-default-anchoreen-xxxx-xxxxxxxxxx.us-east-1.elb.amazonaws.com ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status

Anchore Feeds

It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:

ANCHORECTL_URL=http://xxxxxx-default-anchoreen-xxxx-xxxxxxxxxx.us-east-1.elb.amazonaws.com ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list

Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync occurs.

Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Vulnerability Management section of our documentation for more information.

2.3.3 - Deploying Anchore Enterprise on Google Kubernetes Engine (GKE)

Get an understanding of deploying Anchore Enterprise on a Google Kubernetes Engine (GKE) cluster and exposing it on the public Internet.

Note: When using Google Cloud, consider utilizing Cloud SQL for PostgreSQL as a managed database service.

Prerequisites

  • A running GKE cluster with worker nodes launched. See GKE Documentation for more information on this setup.
  • Helm client installed on local host.
  • AnchoreCTL installed on local host.

Once you have a GKE cluster up and running with worker nodes launched, you can verify it by using the following command.

$ kubectl get nodes
NAME                                                STATUS   ROLES    AGE   VERSION
gke-standard-cluster-1-default-pool-c04de8f1-hpk4   Ready    <none>   78s   v1.13.7-gke.24
gke-standard-cluster-1-default-pool-c04de8f1-m03k   Ready    <none>   79s   v1.13.7-gke.24
gke-standard-cluster-1-default-pool-c04de8f1-mz3q   Ready    <none>   78s   v1.13.7-gke.24

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:

  • Anchore Enterprise software
  • PostgreSQL (13 or higher)
  • Redis (4)

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore. The following is intended to cover the minimum required changes to successfully deploy Anchore Enterprise on Google Kubernetes Engine.

Note: For this deployment, a GKE ingress controller will be used. You can read more about Kubernetes Ingress with a GKE Ingress Controller here.

Configurations

Make the following changes to your anchore_values.yaml:

Ingress

ingress:
  enabled: true
  apiPaths:
    - /v2/*
  uiPath: /*

Note: Configuring ingress is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

Anchore API Service

api:
  replicaCount: 1
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Note: Changed the service type to NodePort.

Anchore Enterprise UI

ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

Note: Changed service type to NodePort.

Anchore Enterprise Deployment

Create Secrets

Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.

Create a Kubernetes secret containing your license file:

kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

Create a Kubernetes secret containing DockerHub credentials with access to the private Anchore Enterprise software:

kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

Deploy Anchore Enterprise:

helm repo add anchore https://charts.anchore.io

helm install anchore anchore/enterprise -f anchore_values.yaml

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           3/3     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 2/2     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-feeds-77b8976c4c-rs8h2                                    1/1     Running   0          13m
anchore-feeds-db-0                                                1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m

Run the following command for details on the deployed ingress resource:

$ kubectl describe ingress
Name:             anchore-enterprise
Namespace:        default
Address:          34.96.64.148
Default backend:  default-http-backend:80 (10.8.2.6:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /v2/*   anchore-enterprise-api:8228 (<none>)
        /*      anchore-enterprise-ui:80 (<none>)
Annotations:
  kubernetes.io/ingress.class:            gce
  ingress.kubernetes.io/backends:         {"k8s-be-31175--55c0399dc5755377":"HEALTHY","k8s-be-31274--55c0399dc5755377":"HEALTHY","k8s-be-32037--55c0399dc5755377":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule:  k8s-fw-default-anchore-enterprise--55c0399dc5750
  ingress.kubernetes.io/target-proxy:     k8s-tp-default-anchore-enterprise--55c0399dc5750
  ingress.kubernetes.io/url-map:          k8s-um-default-anchore-enterprise--55c0399dc5750
Events:
  Type    Reason  Age   From                     Message
  ----    ------  ----  ----                     -------
  Normal  ADD     15m   loadbalancer-controller  default/anchore-enterprise
  Normal  CREATE  14m   loadbalancer-controller  ip: 34.96.64.148

The output above shows that a load balancer has been created. Navigate to the specified URL in a browser:

login

Anchore System

Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://34.96.64.148 ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status

Anchore Feeds

It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:

ANCHORECTL_URL=http://34.96.64.148 ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list

Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync occurs.

Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Vulnerability Management section of our documentation for more information.

2.3.4 - Deploying Anchore Enterprise on OpenShift

This document will walk through the deployment of Anchore Enterprise on an OpenShift Kubernetes Distribution (OKD) 3.11 cluster and expose it on the public internet.

Note: While this document walks through deploying on OKD 3.11, it has been successfully deployed and tested on OpenShift 4.2.4 and 4.2.7.

Prerequisites

  • A running OpenShift Kubernetes Distribution (OKD) 3.11 cluster. Read more about the installation requirements here.
    • Note: If deploying to a running OpenShift 4.2.4+ cluster, read more about the installation requirements here.
  • Helm client and server installed and configured with your cluster.
  • AnchoreCTL installed on local host.

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise installation of the chart will include the following:

  • Anchore Enterprise Software
  • PostgreSQL (13)
  • Redis (17)

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore; this document is intended to cover the minimum required changes to successfully deploy Anchore Enterprise on OKD 3.11.

OpenShift Configurations

Create a new project

Create a new project called anchore-enterprise:

oc new-project anchore-enterprise

Create secrets

Two secrets are required for an Anchore Enterprise deployment.

Create a secret for the license file: oc create secret generic anchore-enterprise-license --from-file=license.yaml=license.yaml

Create a secret for pulling the images: oc create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<username> --docker-password=<password> --docker-email=<email>

Verify these secrets are in the correct namespace: anchore-enterprise

oc describe secret <secret-name>

Link the above Docker registry secret to the default service account:

oc secrets link default anchore-enterprise-pullcreds --for=pull --namespace=anchore-enterprise

Verify this by running the following:

oc describe sa

Note: Validate your OpenShift SCC. Based on the security constraints of your environment, you may need to change the SCC:

oc adm policy add-scc-to-user anyuid -z default

Anchore Configurations

Create a custom anchore_values.yaml file for your Anchore Enterprise deployment:

# NOTE: This is not a production ready values file for an openshift deployment.

securityContext:
  fsGroup: null
  runAsGroup: null
  runAsUser: null
postgresql:
  primary:
    containerSecurityContext:
      enabled: false
    podSecurityContext:
      enabled: false
ui-redis:
  master:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false

Install software

Run the following command to install the software:

helm repo add anchore https://charts.anchore.io

helm install anchore -f values.yaml anchore/enterprise

It will take the system several minutes to bootstrap. You can check on the status of the pods by running oc get pods:

$ oc get pods
NAME                                                              READY     STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           1/1     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 1/1     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-enterprise-datasyncer-585997576d-2fgkg                    1/1     Running   0          13m
anchore-enterprise-reportsworker-6fb4f55455-f2ts2                 1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m

Create route objects

Create two route objects in the OpenShift console to expose the UI and API services on the public internet:

Note: Route configuration is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

API Route

api-config

UI Route

ui-config

Routes

routes

Verify by navigating to the anchore-enterprise-ui route hostname:

ui

Anchore System

First you will need to retrieve the admin password, which is stored as a secret during the Helm install process:

oc get secret anchore-enterprise-env -o jsonpath='{.data.ANCHORE_ADMIN_PASSWORD}' -n anchore  | base64 -d

You can customize your Helm values.yaml file to use an existing or custom secret rather than have Helm generate one for you with a generated password, as sketched below.
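
A sketch of that override, assuming the chart exposes useExistingSecrets and existingSecretName parameters (verify the exact names against the chart README):

useExistingSecrets: true
existingSecretName: my-anchore-secrets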

Verify API route hostname with AnchoreCTL:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://anchore-engine-anchore-enterprise.apps.54.84.147.202.nip.io \
ANCHORECTL_USERNAME=admin \
ANCHORECTL_PASSWORD=foobar \
anchorectl system status

Anchore Vulnerability Data

Anchore has a datasyncer service that pulls vulnerability data and other data sources, such as the ClamAV malware database, into your Anchore deployment. You can check on the status of this feed data using AnchoreCTL:

ANCHORECTL_URL=http://anchore-engine-anchore-enterprise.apps.54.84.147.202.nip.io \
ANCHORECTL_USERNAME=admin \
ANCHORECTL_PASSWORD=foobar \
anchorectl feed list

Note: Please continue to the Vulnerability Management section of our documentation for more information about vulnerability management within Anchore.

2.4 - Deploying AnchoreCTL

In this section you will learn how to deploy and configure AnchoreCTL, the Anchore Enterprise Command Line Interface.

AnchoreCTL is published as a simple binary available for download either from your Anchore Enterprise deployment or Anchore’s release site.

Using AnchoreCTL, you can manage and inspect all aspects of your Anchore Enterprise deployments, either as a manual human-readable configuration/instrumentation/control tool or as a CLI that is designed to be used in scripted environments such as CI/CD and other automation environments.

Installation

AnchoreCTL’s major and minor release versions coincide with the release version of Anchore Enterprise; however, patch versions may differ. For example:

  • Enterprise v5.14.0
  • AnchoreCTL v5.14.0

Important It is highly recommended that the version of AnchoreCTL you are using is supported by the deployed version of Enterprise. Please refer to the Enterprise Release Notes for the supported version of AnchoreCTL. See Local examples below where anchorectl can be downloaded from your Anchore Enterprise deployment.

MacOS / Linux

Download a local (from your Anchore deployment) or remote (from Anchore servers) version without installation:

Linux Intel/AMD64

# Local
curl -X GET "https://my-anchore.example.com/v2/system/anchorectl?operating_system=linux&architecture=amd64" -H "accept: */*"
# Remote
curl -o anchorectl.tar.gz https://anchorectl-releases.anchore.io/anchorectl/v5.14.0/anchorectl_5.14.0_linux_amd64.tar.gz

MacOS Intel/AMD64

# Local
curl -X GET "https://my-anchore.example.com/v2/system/anchorectl?operating_system=darwin&architecture=amd64" -H "accept: */*"
# Remote
curl -o anchorectl.tar.gz https://anchorectl-releases.anchore.io/anchorectl/v5.14.0/anchorectl_5.14.0_darwin_amd64.tar.gz

MacOS ARM/M-Series

# Local
curl -X GET "https://my-anchore.example.com/v2/system/anchorectl?operating_system=darwin&architecture=arm64" -H "accept: */*"
# Remote
curl -o anchorectl.tar.gz https://anchorectl-releases.anchore.io/anchorectl/v5.14.0/anchorectl_5.14.0_darwin_arm64.tar.gz

Windows

For Windows, you must specify the version of AnchoreCTL to download if using a script.

# Local
curl -X GET "https://my-anchore.example.com/v2/system/anchorectl?operating_system=windows&architecture=amd64" -H "accept: */*"
# Remote
curl -o anchorectl.zip https://anchorectl-releases.anchore.io/anchorectl/v5.14.0/anchorectl_5.14.0_windows_amd64.zip

Installing a specific AnchoreCTL version

# Replace <DESTINATION_DIR> with /usr/local/bin (for example)
curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> v5.14.0
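
After downloading a release archive (as in the examples above), a typical unpack-and-verify sequence looks like the following; it assumes the archive contains the anchorectl binary at its root and that /usr/local/bin is on your PATH:

tar -xzf anchorectl.tar.gz
sudo mv anchorectl /usr/local/bin/
anchorectl version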

Configuration

Once AnchoreCTL has been installed, learn about AnchoreCTL Configuration.

2.5 - Anchore Enterprise in an Air-Gapped Environment

Anchore Enterprise can run in an isolated environment with no outside internet connectivity. It does require a network connection to its own components and must be able to reach registries (Docker v2 API compatible) where the images to be analyzed are hosted.

Installation

Air-gapped deployment follows the standard deployment procedure for either Docker Compose or Kubernetes with Helm.

Data Synchronization

To ensure that the Anchore Enterprise installation has up-to-date vulnerability data from the vulnerability sources, you will need to periodically download and import feed data into your Anchore Enterprise deployment. Details on how to do this can be found in the Air-Gapped Configuration.

For more detail regarding the Anchore Data Service, please see Anchore Data Service.

3 - Integration

Integrations that Anchore Supports

Integration handling in Enterprise

With the v5.11.0 release, Anchore Enterprise introduces an API so that the software entities (agents, plugins, etc.) that integrate external systems with Enterprise can be tracked and monitored.

As of v5.11.0, only the Kubernetes Inventory agent uses this API. Existing versions of agents and plugins will continue to work as before, but they cannot be tracked and monitored with the new functionality.

This new feature and its API have two broad parts: integration registration and integration health reporting. These are discussed further below.

Terminology

Integration instance: A software entity, like an agent or plugin, that integrates an external system with Anchore Enterprise. A deployed Kubernetes Inventory agent, Kubernetes Admission Controller, and ECS Inventory agent are all examples of integration instances.

Integration status: The (life-cycle) status of the integration instance as perceived by Enterprise. After registration is completed, this status is determined by whether or not health reports are received.

Reported status: The status of the integration instance as perceived by the integration instance itself. This is determined from the contents of the health reports, specifically whether or not they contain any errors.

Integration registration

When an integration instance that supports integration registration and health reporting is started, it will perform registration with Anchore Enterprise. This is a kind of handshake where the integration instance introduces itself, declaring which type it is and presenting various other information about itself. In response, Anchore Enterprise provides the integration instance with the uuid that identifies the integration instance from that point onwards.

The registration request includes two identifiers: registration_id and registration_instance_id. Anchore Enterprise maintains a record of the association between integration uuid and <registration_id, registration_instance_id>.

If an integration instance is restarted, it will perform registration again. Assuming the <registration_id, registration_instance_id> pair in that re-registration remains the same as in the original registration, Enterprise will consider the integration instance to be the same (and thus provide the integration instance with the same uuid). Should the <registration_id, registration_instance_id> pair be different, then Enterprise will consider the integration instance to be different and assign it a new uuid.

Integrations deployed as multiple replicas

An integration can be deployed as multiple replicas. An example is the Kubernetes Inventory agent, whose Helm chart deploys it as a Kubernetes Deployment. That Deployment can be specified to have replicas > 1, although this is not advisable: the agent is, strictly speaking, not implemented to be executed as multiple replicas, so it will work but only add unnecessary load.

In such a case, each replica will have identical configuration. Each replica will register as an integration instance and be given its own uuid. By inspecting the registration_id and registration_instance_id, it is often possible to determine whether instances are part of the same replica set: they will have registered with an identical registration_id but different registration_instance_id values. The exception is if each integration instance self-generated a unique registration_id during registration; in that case they cannot be identified as belonging to the same replica set this way.

Integration health reporting

Once registered, an integration instance can send periodic health reports to Anchore Enterprise. The interval between two health reports can be configured to be 30 to 600 seconds. A default value will typically be 60 seconds.

Each health report includes a uuid that identifies it and a timestamp of when it was sent. These can be used when searching the integration instance’s log file during troubleshooting. The health report also includes the uptime of the integration instance as well as an ‘errors’ property that contains errors that the integration wants to make Anchore Enterprise aware of. In addition to the above, health reports can also include data specific to the type of integration.
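
Based on the fields described above, a health report payload might look roughly like the following; this is an illustrative sketch only, and the exact field names and structure are assumptions drawn from this description:

{
  "uuid": "d676f221-0cc7-485e-b909-a5a1dd8d244e",
  "timestamp": "2024-10-21T08:21:01Z",
  "uptime": 3600,
  "errors": [
    "user account not found (account1) | "
  ]
}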

Reported status derived from health reports

When Anchore Enterprise receives a health report that contains errors from an integration instance, it will set that instance’s reportedStatus.state to unhealthy and set reportedStatus.healthReportUuid to the uuid of the health report. If subsequent health reports do not contain errors, the instance’s reportedStatus.state is set to healthy and reportedStatus.healthReportUuid is unset.

This is an example of what the reported status can look like from an integration instance that sends health reports indicating errors:

{
    "reportedStatus": {
      "details": {
        "errors": [
          "unable to report Inventory to Anchore account account0: failed to report data to Anchore: \u0026{Status:4",
          "user account not found (account1) | ",
          "unable to report Inventory to Anchore account account2: failed to report data to Anchore: \u0026{Status:4",
          "user account not found (account3) | "
        ],
        "healthReportUuid": "d676f221-0cc7-485e-b909-a5a1dd8d244e"
      },
      "reason": "Health report included errors",
      "state": "unhealthy"
    }
}

The details.errors list indicates that there are some issues related to ‘account0’, ‘account1’, ‘account2’, and ‘account3’. To fully triage and troubleshoot these issues, one will typically have to search the log file of the integration instance.

This is an example of the reported status for a case without errors:

{
    "reportedStatus": {
      "state": "healthy"
    }
}

The below figure illustrates how the reportedStatus.state property will transition between its states.

Integration status derived from health reports

When an integration instance registers with Anchore Enterprise, it will declare at what interval it will send health reports. A typical value will be 60 seconds.

As long as health reports are received from an integration instance, Enterprise will consider it to be active. This is reflected in the integration instance’s integrationStatus.state which is set to active.

If three (3) consecutive health reports fail to be received by Anchore Enterprise, it will set the integration instance’s integrationStatus.state to inactive. For example, with a 60-second report interval, an instance is marked inactive roughly three minutes after its last received report.

This is an example of what the integration status can look like when health reports have not been received from an integration instance:

{
  "integrationStatus": {
    "reason": "Integration last_seen timestamp is older than 2024-10-21 15:33:07.534974",
    "state": "inactive",
    "updatedAt": "2024-10-21T15:46:07Z"
  }
}

A next step to triage this could be to check if the integration instance is actually running or if there is some network connectivity issue preventing health reports from being received.

This is an example of integration status when health reports are received as expected:

{
  "integrationStatus": {
    "state": "active",
    "updatedAt": "2024-10-21T08:21:01Z"
  }
}

The below figure illustrates how the integrationStatus.state will transition between its (lifecycle) states.

Integration instance properties

An integration instance has the below properties. Some properties may not have a value.

  • accountName: The account that integration instance used during registration (and thus belongs to).
  • accounts: List of account names that the integration instance handles. The list is updated from information contained in health reports from the integration instance. For the Kubernetes Inventory agent, this list holds all accounts that the agent has recently attempted to send inventory reports for (regardless of whether the attempt succeeded).
  • clusterName: The cluster where the integration instance executes. This will typically be a Kubernetes cluster.
  • description: Short arbitrary text description of the integration instance.
  • explicitlyAccountBound: List of account names that the integration instance is explicitly configured to handle. This does not include account names that an integration instance could learn dynamically. For instance, the Kubernetes Inventory agent can learn about account names to handle via a special label set on the namespaces. Such account names are not included in this property.
  • healthReportInterval: Interval in seconds between health reports from the integration instance.
  • integrationStatus: The (life cycle) status of the integration instance.
  • lastSeen: Timestamp when the last health report was received from the integration instance.
  • name: Name of the integration instance.
  • namespace: The namespace where the integration executes. This will typically be a Kubernetes namespace.
  • namespaces: List of namespaces that the integration is explicitly configured to handle.
  • registrationId: Registration id that the integration instance used during registration.
  • registrationInstanceId: Registration instance id that the integration instance used during registration.
  • "reportedStatus: The health status of the integration instance derived from information reported in the last health report.
  • startedAt: Timestamp when the integration instance was started.
  • type: The type of the integration instance. In Enterprise v5.11.0, k8s_inventory_agent is the only value.
  • uptime: Uptime (in seconds) of the integration instance.
  • username: Username that the integration instance registered using.
  • uuid: The UUID of the integration instance. Used in REST API to specify instance.
  • version: Software version that the integration instance runs.
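
Putting these properties together, a registered Kubernetes Inventory agent might be represented roughly as follows; the values are purely illustrative:

{
  "accountName": "admin",
  "accounts": ["admin"],
  "clusterName": "prod-cluster",
  "healthReportInterval": 60,
  "integrationStatus": {
    "state": "active",
    "updatedAt": "2024-10-21T08:21:01Z"
  },
  "lastSeen": "2024-10-21T08:21:01Z",
  "name": "k8s-inventory-agent",
  "namespace": "anchore",
  "reportedStatus": {
    "state": "healthy"
  },
  "type": "k8s_inventory_agent",
  "uptime": 86400,
  "username": "admin",
  "uuid": "2c7e1f7a-0c2b-4d31-9a9e-5b1f2c3d4e5f",
  "version": "1.0.0"
}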

3.1 - Container Registries via the API

Using the API or CLI, Anchore Enterprise can be instructed to download an image from a public or private container registry.

Anchore Enterprise will attempt to download images from any registry without requiring further configuration. However, if your registry requires authentication, then the registry and corresponding credentials will need to be defined. Anchore Enterprise can analyze images from any Docker V2 compatible registry.

alt text

Jump to the registry configuration guide for your registry:

3.1.1 - Amazon Elastic Container Registry

Amazon AWS typically uses keys instead of traditional usernames and passwords. These keys consist of an access key ID and a secret access key. While it is possible to use the aws ecr get-login command to create an access token, the token expires after 12 hours, so it is not appropriate for use with Anchore Enterprise; a user would otherwise need to update their registry credentials regularly. When adding an Amazon ECR registry to Anchore Enterprise, you should therefore pass the aws_access_key_id and aws_secret_access_key.


# ANCHORECTL_REGISTRY_PASSWORD=<MY_AWS_SECRET_ACCESS_KEY> anchorectl registry add 1234567890.dkr.ecr.us-east-1.amazonaws.com --username <MY_AWS_ACCESS_KEY_ID> --type awsecr

The registry-type parameter instructs Anchore Enterprise to handle these credentials as AWS credentials rather than traditional usernames and passwords. Currently Anchore Enterprise supports two types of registry authentication: standard username and password for most Docker V2 registries, and Amazon ECR. In this example we specified the registry type on the command line; however, if this parameter is omitted, AnchoreCTL will attempt to guess the registry type from the URL, which uses a standard format.

Anchore Enterprise will use the AWS access key and secret access key to generate authentication tokens for the Amazon ECR registry, and will manage regeneration of these tokens, which typically expire after 12 hours.

In addition to supporting AWS access key credentials, Anchore also supports the use of IAM roles for authenticating with Amazon ECR if Anchore Enterprise is run on an EC2 instance.

In this case, you can configure Anchore Enterprise to inherit the IAM role from the EC2 instance hosting the system.

When launching the EC2 instance that will run Anchore Enterprise you need to specify a role that includes the AmazonEC2ContainerRegistryReadOnly policy.

While this is best performed using a CloudFormation template, you can manually configure the role from the launch instance wizard, or use the AWS CLI as sketched below.
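For reference, a minimal AWS CLI sketch of the same setup follows; the ECR-ReadOnly role and instance profile names are illustrative (they match the example verification output later in this section) and should be adjusted for your environment:

# Create the role with an EC2 trust policy
aws iam create-role --role-name ECR-ReadOnly \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the AWS-managed read-only ECR policy
aws iam attach-role-policy --role-name ECR-ReadOnly \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

# Create an instance profile, add the role to it, then attach the profile to the EC2 instance
aws iam create-instance-profile --instance-profile-name ECR-ReadOnly
aws iam add-role-to-instance-profile --instance-profile-name ECR-ReadOnly --role-name ECR-ReadOnly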

Step 1: Select Create new IAM role.

logo

Step 2: Under type of trusted entity select EC2.

logo

Ensure that the AmazonEC2ContainerRegistryReadOnly policy is selected.

Step 3: Attach Permissions to the Role.

logo

Step 4: Name the role.

Give a name to the role and add this role to the Instance you are launching.

On the running EC2 instance you can manually verify that the instance has inherited the correct role by running the following command:

# curl http://169.254.169.254/latest/meta-data/iam/info
{
 "Code" : "Success",
 "LastUpdated" : "2018-01-12T18:45:12Z",
 "InstanceProfileArn" : "arn:aws:iam::123456789012:instance-profile/ECR-ReadOnly",
 "InstanceProfileId" : "ABCDEFGHIJKLMNOP"
}

Step 5: Enable IAM Authentication in Anchore Enterprise.

By default the support for inheriting the IAM role is disabled.

To enable IAM based authentication, add the following entry to the top of the Anchore Enterprise config.yaml file:

allow_awsecr_iam_auto: True

Step 6: Add the Registry using the AWSAUTO user.

When IAM support is enabled, instead of passing the access key and secret access key, use “awsauto” for both the username and password. This will instruct Anchore Enterprise to inherit the role from the underlying EC2 instance.


# ANCHORECTL_REGISTRY_PASSWORD=awsauto anchorectl registry add 1234567890.dkr.ecr.us-east-1.amazonaws.com --username awsauto --type awsecr

3.1.2 - Azure Container Registry

To use an Azure Registry, you can configure Anchore to use either the admin credential(s) or a service principal. Refer to Azure documentation for the differences and how to set up each. When you’ve chosen a credential type, use the following to determine which registry command options correspond to each value for your credential type:

  • Admin Account

    • Registry: The login server (Ex. myregistry1.azurecr.io)
    • Username: The username in the ‘az acr credential show --name <registry>’ output
    • Password: The password or password2 value from the ‘az acr credential show’ command result
  • Service Principal

    • Registry: The login server (Ex. myregistry1.azurecr.io)
    • Username: The service principal app id
    • Password: The service principal password
      Note: You can follow Microsoft Documentation for creating a Service Principal.

To add an azure registry credential, invoke anchorectl as follows:

ANCHORECTL_REGISTRY_PASSWORD=<password> anchorectl registry add <registry> --username <username>

Once a registry has been added, any image that is added (e.g. anchorectl image add <registry>/some/repo:sometag) will use the provided credential to download, inspect, and analyze the image.
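For example, to add a registry using service principal credentials (the registry name is illustrative and the placeholders are your service principal values):

# ANCHORECTL_REGISTRY_PASSWORD=<service_principal_password> anchorectl registry add myregistry1.azurecr.io --username <service_principal_app_id>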

3.1.3 - Google Container Registry

When working with Google Container Registry, it is recommended that you use JSON keys rather than the short-lived access tokens.

JSON key files are long-lived and are tightly scoped to individual projects and resources. You can read more about JSON credentials in Google’s documentation at the following URL: Google Container Registry advanced authentication

Once a JSON key file has been created with permissions to read from the container registry, the registry should be added with the username _json_key, and the password should be the contents of the key file.

In the following example, a file named key.json in the current directory contains the JSON key with read-only access to the my-repo repository within the my-project Google Cloud project.


# ANCHORECTL_REGISTRY_PASSWORD="$(cat key.json)" anchorectl registry add us.gcr.io --username _json_key
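If you need to create such a key, a minimal sketch with the gcloud CLI follows; the service account name and project are illustrative, and the account must already have read access to the registry:

$ gcloud iam service-accounts keys create key.json --iam-account=my-reader@my-project.iam.gserviceaccount.com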

3.1.4 - Harbor Container Registry

To use the Harbor registry, ensure you have the following credentials:

  1. Harbor URL: The base URL of your Harbor registry.
  2. Harbor Admin Username: The username for the Harbor administrator account.
  3. Harbor Admin Password: The corresponding password for the administrator account.

Invoke anchorectl as follows to add the registry:

ANCHORECTL_REGISTRY_PASSWORD=Harbor12345 anchorectl registry add core.harbor.domain --username admin

Once a registry has been added, any image that is added (e.g. anchorectl image add core.harbor.domain/some/repo:sometag) will use the provided credential to download, inspect, and analyze the image.

3.1.5 - Managing Registries

Anchore Enterprise will attempt to download images from any registry without requiring further configuration. However, if your registry requires authentication, then the registry and corresponding credentials will need to be defined.

Listing Registries

Running the following command lists the defined registries.

# anchorectl registry list
 ✔ Fetched registries
┌───────────────────┬───────────────┬───────────────┬─────────────────┬──────────────────────┬──────────────┬───────────────────┐
│ REGISTRY NAME     │ REGISTRY TYPE │ REGISTRY USER │ REGISTRY VERIFY │ CREATED AT           │ LAST UPDATED │ REGISTRY          │
├───────────────────┼───────────────┼───────────────┼─────────────────┼──────────────────────┼──────────────┼───────────────────┤
│ docker.io         │ docker_v2     │ anchore       │ true            │ 2022-08-24T21:37:08Z │              │ docker.io         │
│ quay.io           │ docker_v2     │ anchore       │ true            │ 2022-08-25T20:55:33Z │              │ quay.io           │
│ 192.168.1.89:5000 │ docker_v2     │ johndoe       │ true            │ 2022-08-25T20:56:01Z │              │ 192.168.1.89:5000 │
└───────────────────┴───────────────┴───────────────┴─────────────────┴──────────────────────┴──────────────┴───────────────────┘

Here we can see that three registries have been defined. If no registry is defined, Anchore Enterprise will attempt to pull images without authentication, but if a registry is defined then all pulls for images from that registry will use the specified username and password.

Adding a Registry

Registries can be added using the following syntax.

# ANCHORECTL_REGISTRY_PASSWORD=<password> anchorectl registry add <registry> --username <username>

The REGISTRY parameter should include the fully qualified hostname and port number of the registry. For example: registry.anchore.com:5000

Anchore Enterprise will only pull images from a TLS/SSL enabled registry. If the registry is protected with a self-signed certificate or a certificate signed by an unknown certificate authority, then the --secure-connection=<true|false> parameter can be passed, which instructs Anchore Enterprise not to validate the certificate.
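For example, to add a registry that presents a self-signed certificate (reusing the registry and user from the listing above):

# ANCHORECTL_REGISTRY_PASSWORD=<password> anchorectl registry add 192.168.1.89:5000 --username johndoe --secure-connection=false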

Most Docker V2 compatible registries require username and password for authentication. Amazon ECR, Google GCR and Microsoft Azure include support for their own native credentialing. See Working with AWS ECR Registry Credentials, Working with Google GCR Registry Credentials and Working with Azure Registry Credentials for more details.

Getting Registry Details

The registry get command allows the user to retrieve details about a specific registry.

For example:

# anchorectl registry get registry.example.com
 ✔ Fetched registry
┌──────────────────────┬───────────────┬───────────────┬─────────────────┬──────────────────────┬──────────────┬──────────────────────┐
│ REGISTRY NAME        │ REGISTRY TYPE │ REGISTRY USER │ REGISTRY VERIFY │ CREATED AT           │ LAST UPDATED │ REGISTRY             │
├──────────────────────┼───────────────┼───────────────┼─────────────────┼──────────────────────┼──────────────┼──────────────────────┤
│ registry.example.com │ docker_v2     │ johndoe       │ false           │ 2022-08-25T20:58:33Z │              │ registry.example.com │
└──────────────────────┴───────────────┴───────────────┴─────────────────┴──────────────────────┴──────────────┴──────────────────────┘

In this example we can see that the registry.example.com registry was added to Anchore Enterprise on 25 August at 20:58 UTC. The password for the registry cannot be retrieved through the API or AnchoreCTL.

Updating Registry Details

Once a registry has been defined, its parameters can be updated using the update command. This allows a registry’s username, password, and secure-connection (validate TLS) parameters to be updated using the same syntax as is used in the ‘add’ operation.

# ANCHORECTL_REGISTRY_PASSWORD=<newpassword> anchorectl registry update registry.example.com --username <newusername> --validate=<true|false> --secure-connection=<true|false>

Deleting Registries

A registry can be deleted from Anchore’s configuration using the delete command.

For example, to delete the configuration for registry.example.com, the following command should be issued:

# anchorectl registry delete registry.example.com
 ✔ Deleted registry
No results

Note: Deleting a registry record does not delete the records of images/tags associated with that registry.

Advanced

Anchore Enterprise attempts to validate credentials when a registry is added, but there are cases where a credential is valid yet the validation routine fails (in particular, credential validation methods for public registries change over time). If you are unable to add a registry but believe that the credential you are providing is valid, or you wish to add a credential to Anchore before it is in place in the registry, you can bypass the registry credential validation process using the --validate=false option to the registry add or registry update command.
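For example:

# ANCHORECTL_REGISTRY_PASSWORD=<password> anchorectl registry add <registry> --username <username> --validate=false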

3.2 - Configuring Registries via the GUI

Introduction

In this section you will learn how to configure access to registries within the Anchore Enterprise UI.

Assumptions

  • You have a running instance of Anchore Enterprise and access to the UI.
  • You have the appropriate permissions to list and create registries. This means you are either a user in the admin account, or a user that is already a member of the read-write role for your account.

The UI will attempt to download images from any registry without requiring further configuration. However, if your registry requires authentication then the registry and corresponding credentials will need to be defined.

First off, after a successful login, navigate to the Configuration tab in the main menu.

alt text

Add a New Registry

In order to define a registry and its credentials, navigate to the Registries tab within Configuration. If you have not yet defined any registries, select the Let’s add one! button. Otherwise, select the Add New Registry button on the right-hand side.

Upon selection, a modal will appear:

alt text

A few items will be required:

  • Registry
  • Type (e.g. docker_v2 or awsecr)
  • Username
  • Password

As the required field values may vary depending on the type of registry and credential options, they will be covered in more depth below. A couple of additional options are also provided:

  • Allow Self Signed By default, the UI will only pull images from a TLS/SSL enabled registry. If the registry is protected with a self-signed certificate or a certificate signed by an unknown certificate authority, you can enable this option by sliding the toggle to the right to instruct the UI not to validate the certificate.

  • Validate on Add Credential validation is attempted by default upon registry addition, although there may be cases where a credential is valid but the validation routine fails (in particular, credential validation methods for public registries change over time). Disabling this option by sliding the toggle to the left will instruct the UI to bypass the validation process.

Once a registry has been successfully configured, its credentials as well as the options mentioned above can be updated by clicking Edit under the Actions column. For more information on analyzing images with your newly defined registry, refer to: UI - Analyzing Images.

alt text

The instructions provided below for setting up the various registry types can also be seen inline by clicking ‘Need some help setting up your registry?’ near the bottom of the modal.

Docker V2 Registry

Regular Docker V2 registries include Docker Hub, Quay.io, Artifactory, the Docker Registry v2 container, the Red Hat public container registry, and many others. Generally, if you can execute a ‘docker login’ with a pair of credentials, Anchore can use those.

  • Registry Hostname or IP of your registry endpoint, with an optional port Ex: docker.io, mydocker.com:5000, 192.168.1.20:5000

  • Type Set this to docker_v2

  • Username Username that has access to the registry

  • Password Password for the specified user

Amazon Web Services Registry (AWS ECR)

  • Registry The ECR endpoint hostname Ex: 123456789012.dkr.ecr.us-east-1.amazonaws.com

  • Type Set this to awsecr

For Username and Password, there are three different modes that require different settings when adding an ECR registry, depending on where your Anchore Enterprise is running and how your AWS IAM settings are configured to allow access to a given ECR registry.

  1. API Keys Provide access/secret keys from an account or IAM user. We highly recommend using a dedicated IAM user with specific access restrictions for this mode.

    • Username AWS access key

    • Password AWS secret key

  2. Local Credentials Uses the AWS credentials found in the local execution environment for Anchore Enterprise (Ex. env vars, ~/.aws/credentials, or instance profile).

    • Username Set this to awsauto

    • Password Set this to awsauto

  3. ECR Assume Role To have Anchore Enterprise assume a specific role different from the role it currently runs within, specify a different role ARN. Anchore Enterprise will use the execution role (as in iamauto mode from the instance/task profile) to assume a different role. The execution role must have permissions to assume the role requested.

    • Username Set this to _iam_role

    • Password The desired role’s ARN

For more information, see: Working with Amazon ECR Registry Credentials

Google Container Registry (GCR)

When working with Google Container Registry, it is recommended that you use service account JSON keys rather than the short lived access tokens. Learn more about how to generate a JSON key here.

  • Registry GCR registry hostname endpoint Ex: gcr.io, us.gcr.io, eu.gcr.io, asia.gcr.io

  • Type Set this to docker_v2

  • Username Set this to _json_key

  • Password Full JSON string of your JSON key (the content of the key.json file you got from GCR)

For more information, see: Working with Google Container Registry (GCR) Credentials

Microsoft Azure Registry

To use an Azure Registry, you can configure Anchore to use either the admin credential(s) or a service principal. Refer to Azure documentation for differences and how to setup each.

  • Registry The login server Ex. myregistry1.azurecr.io

  • Type Set this to docker_v2

  1. Admin Account Username The username in the ‘az acr credential show --name <registry>’ output

    Password The password or password2 value from the ‘az acr credential show’ command result

  2. Service Principal Username The service principal app id

    Password The service principal password

For more information, see: Working with Azure Registry Credentials

Harbor Registry

To use a Harbor Registry, you will need to provide the Harbor registry URL, along with your Harbor username and password. Ensure the Type is set to docker_v2.

  • Registry The login server Ex. core.harbor.domain

  • Type Set this to docker_v2

  • Username The username you use to sign in to Harbor (e.g., admin).

    Password The password you use to log in to Harbor (e.g., Harbor12345).

3.3 - CI / CD Integration

Anchore Enterprise can be integrated into CI/CD systems such as Jenkins, GitHub, or GitLab to secure pipelines by adding automatic scanning.

If an artifact does not pass the policy checks, users can either configure a gating workflow that fails the build, or allow the pipeline to continue with a warning to the build job owner. Notifications can be handled via the CI/CD system itself or using Anchore’s native notification system, and can provide information about the CVEs discovered and the complete policy analysis. Images that pass the policy check can be promoted to the production registry.

There are two ways to use CI/CD with Anchore: distributed mode or centralized mode. Both modes work with any CI/CD system as long as the AnchoreCTL binary can be installed and run, or you can access the Enterprise APIs directly.

Distributed mode

The build job invokes a tool called AnchoreCTL locally on the CI/CD runner to generate both data and metadata about the artifact being scanned, such as source code or a container image, in the form of a software bill of materials (SBOM). The SBOM is then passed to Anchore Enterprise for analysis. The policy analysis can look for known CVEs, exposed secrets, incorrect configurations, licenses, and more.

Centralized mode

The build job uploads the container image to a repository and then requests that Anchore Enterprise pull it down, generate the SBOM on the backend, and return the policy analysis result.

Requirements

  • Anchore Enterprise is deployed in your environment, with the API accessible from your pipeline runner.
  • Centralized mode only: credentials for your container registry are added to Anchore Enterprise, under the Anchore account that you intend to use with this pipeline. See Registries. For information on what registry/credentials must be added to allow Anchore Enterprise to access your container registry, refer to your container registry’s documentation.

Further Reading

To learn more about distributed and centralized modes, please review the Analyzing Images via CTL documentation.

3.3.1 - GitLab

Requirements

  1. Anchore Enterprise is deployed in your environment, with the API accessible from your GitLab CI environment.
  2. Credentials for your GitLab Container Registry are added to Anchore Enterprise, under the Anchore account that you intend to use with GitLab CI. See Registries. For information on what registry/credentials must be added to allow Anchore Enterprise to access your GitLab Container Registry, see https://docs.gitlab.com/ee/user/packages/container_registry/.

1. Configure Variables

Ensure that the following variables are set in your GitLab repository (settings -> CI/CD -> Variables -> Expand -> Add variable):

ANCHORECTL_USERNAME  (protected)
ANCHORECTL_PASSWORD (protected and masked)
ANCHORECTL_URL (protected)

Note: GitLab enforces a minimum length of 8 characters for variable values. Please ensure both your username and password meet this requirement.

Set Variables

2. Create config file

Create a new file in your repository. Name the file .gitlab-ci.yml.

Set Variables

3. Configure scanning mode

a) Distributed Mode

This is the most easily scalable method for scanning images. Distributed scanning uses the anchorectl utility to build the SBOM directly on the build runner and then pushes the SBOM to Anchore Enterprise through the API. To use this scanning method, paste the following workflow script into your new .gitlab-ci.yml file. After building the image from your Dockerfile and scanning it with anchorectl, this workflow will display vulnerabilities and policy results in the build log. After pasting, click “Commit changes” to save the new file.

### Anchore Distributed Scan
  # you will need three variables defined:
  # ANCHORECTL_USERNAME
  # ANCHORECTL_PASSWORD
  # ANCHORECTL_URL

image: docker:latest
services:
- docker:dind
stages:
- build
- anchore
variables:
  ANCHORECTL_FAIL_BASED_ON_RESULTS: "false"
  ANCHORE_IMAGE: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}

Build:
  stage: build
  script:
    ### build and push docker image
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker build -t ${ANCHORE_IMAGE} .
    - docker push ${ANCHORE_IMAGE}

Anchore:
  stage: anchore
  script:
    ### install anchorectl binary
    - apk add --no-cache curl
    - curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b ${HOME}/.local/bin ${ANCHORECTL_VERSION}
    - export PATH="${HOME}/.local/bin/:${PATH}"
    ### scan image and push to anchore enterprise
    - anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile --from registry ${ANCHORE_IMAGE} 
    ### then get the results:
    - anchorectl image vulnerabilities ${ANCHORE_IMAGE}
    - anchorectl image check --detail ${ANCHORE_IMAGE}

b) Centralized Mode

This method uses the “analyzer” pods in the Anchore Enterprise deployment to build the SBOM. This can create queuing if there are not enough analyzer processes, and this method may require the operator to provide registry credentials in the Enterprise backend (if the images to be scanned are in private registries). This method may be preferred in cases where the Anchore Enterprise operator does not control the image build process (the analyzers can simply poll registries to look for new image builds as they are pushed), and it also allows the operator to simply queue up the image for asynchronous scanning later if vulnerability and policy results are not required immediately. If you want malware scanning results from Anchore Enterprise’s ClamAV integration, the centralized scanning method is required. To use this scanning method, paste the following workflow script into your new .gitlab-ci.yml file. After building the image from your Dockerfile, this workflow will tell Anchore Enterprise to scan the image, then display the vulnerability and policy results in the build log. After pasting, click “Commit changes” to save the new file.

### Anchore Centralized Scan
  # you will need three variables defined:
  # ANCHORECTL_USERNAME
  # ANCHORECTL_PASSWORD
  # ANCHORECTL_URL

image: docker:latest
services:
- docker:dind
stages:
- build
- anchore
variables:
  ANCHORECTL_FAIL_BASED_ON_RESULTS: "false"
  ANCHORE_IMAGE: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}

Build:
  stage: build
  script:
    ### build and push docker image
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker build -t ${ANCHORE_IMAGE} .
    - docker push ${ANCHORE_IMAGE}

Anchore:
  stage: anchore
  script:
    ### install anchorectl binary
    - apk add --no-cache curl
    - curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b ${HOME}/.local/bin ${ANCHORECTL_VERSION}
    - export PATH="${HOME}/.local/bin/:${PATH}"
    ### queue image for scanning
    - anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile ${ANCHORE_IMAGE} 
    ### then get the results:
    - anchorectl image vulnerabilities ${ANCHORE_IMAGE}
    - anchorectl image check --detail ${ANCHORE_IMAGE}

4. View pipeline

GitLab will automatically start a pipeline. Navigate to “Build” -> “Pipelines” and then click on your running pipeline.

Set Variables

5. View output

Once the build is complete, click on the “anchore” stage and view the output of the job. You will see the results of the vulnerability match and policy evaluation in the output.

3.3.2 - GitHub

Image Scanning can be easily integrated into your GitHub Actions pipeline using anchorectl.

1. Configure Variables

Ensure that the following variables/secrets are set in your GitHub repository (repository settings -> secrets and variables -> actions):

  • Variable ANCHORECTL_URL
  • Variable ANCHORECTL_USERNAME
  • Secret ANCHORECTL_PASSWORD

These are necessary for the integration to access your Anchore Enterprise deployment. The ANCHORECTL_PASSWORD value should be created as a repository secret to prevent exposure of the value in job logs, while ANCHORECTL_URL and ANCHORECTL_USERNAME can be created as repository variables.

Set Variables

2. Configure Permissions

(“Settings” -> “Actions” -> “General” -> “Workflow permissions”) select “Read and write permissions” and click “Save”.

Set Variables

3. Create config file

In your repository, create a new file ( “Add file” -> “Create new file”) and name it .github/workflows/anchorectl.yaml.

Set Variables

4. Set scanning mode

a) Distributed Mode

This is the most easily scalable method for scanning images. Distributed scanning uses the anchorectl utility to build the SBOM directly on the build runner and then pushes the SBOM to Anchore Enterprise through the API. To use this scanning method, paste the following workflow script into your new anchorectl.yaml file. After building the image from your Dockerfile and scanning it with anchorectl, this workflow will display vulnerabilities and policy results in the build log.

name: Anchore Enterprise Distributed Scan

on:
  workflow_dispatch:
    inputs:
      mode:
        description: 'On-Demand Build'  
        
env:
  ANCHORECTL_URL: ${{ vars.ANCHORECTL_URL }}
  ANCHORECTL_USERNAME: ${{ vars.ANCHORECTL_USERNAME }}
  ANCHORECTL_PASSWORD: ${{ secrets.ANCHORECTL_PASSWORD }}
  ## set ANCHORECTL_FAIL_BASED_ON_RESULTS to true if you want to break the pipeline based on the evaluation
  ANCHORECTL_FAIL_BASED_ON_RESULTS: false
  REGISTRY: ghcr.io
     
jobs:
  Build:
    runs-on: ubuntu-latest
    steps:
    
    - name: "Set IMAGE environmental variables"
      run: |
        echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
        
    - name: Checkout Code
      uses: actions/checkout@v3
      
    - name: Log in to the Container registry
      uses: docker/login-action@v2
      with:
        registry: ${{ env.REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}      
      
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2


    - name: build local container
      uses: docker/build-push-action@v3
      with:
        tags: ${{ env.IMAGE }}
        push: true
        load: false


  Anchore:
    runs-on: ubuntu-latest
    needs: Build
    steps:
    
    - name: "Set IMAGE environmental variables"
      run: |
        echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
        
    - name: Checkout Code
      ### only need to do this if you want to pass the dockerfile to Anchore during scanning
      uses: actions/checkout@v3
        
    - name: Install Latest anchorectl Binary
      run: |
        curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b ${HOME}/.local/bin v1.6.0
        export PATH="${HOME}/.local/bin/:${PATH}"
            
    - name: Generate SBOM and Push to Anchore
      run: |        
        anchorectl image add --no-auto-subscribe --wait --from registry --dockerfile Dockerfile ${IMAGE}
        
    - name: Pull Vulnerability List
      run: |
        anchorectl image vulnerabilities ${IMAGE} 
        
    - name: Pull Policy Evaluation
      run: |
        # set "ANCHORECTL_FAIL_BASED_ON_RESULTS=true" (see above in the "env:" section) to break the pipeline here if the 
        # policy evaluation returns FAIL or add -f, --fail-based-on-results to this command for the same result
        #
        anchorectl image check --detail ${IMAGE}

b) Centralized Mode

This method uses the “analyzer” pods in the Anchore Enterprise deployment to build the SBOM. This can create queuing if there are not enough analyzer processes, and this method may require the operator to provide registry credentials in the Enterprise backend (if the images to be scanned are in private registries). This method may be preferred in cases where the Anchore Enterprise operator does not control the image build process (the analyzers can simply poll registries to look for new image builds as they are pushed), and it also allows the operator to simply queue up the image for asynchronous scanning later if vulnerability and policy results are not required immediately. If you want malware scanning results from Anchore Enterprise’s ClamAV integration, the centralized scanning method is required. To use this scanning method, paste the following workflow script into your new anchorectl.yaml file. After building the image from your Dockerfile, this workflow will tell Anchore Enterprise to scan the image, then display the vulnerability and policy results in the build log.

name: Anchore Enterprise Centralized Scan

on:
  workflow_dispatch:
    inputs:
      mode:
        description: 'On-Demand Build'  

env:
  ANCHORECTL_URL: ${{ vars.ANCHORECTL_URL }}
  ANCHORECTL_USERNAME: ${{ vars.ANCHORECTL_USERNAME }}
  ANCHORECTL_PASSWORD: ${{ secrets.ANCHORECTL_PASSWORD }}
  ## set ANCHORECTL_FAIL_BASED_ON_RESULTS to true if you want to break the pipeline based on the evaluation
  ANCHORECTL_FAIL_BASED_ON_RESULTS: false
  REGISTRY: ghcr.io

jobs:

  Build:
    runs-on: ubuntu-latest
    steps:
    
    - name: "Set IMAGE environmental variables"
      run: |
        echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
        
    - name: Checkout Code
      uses: actions/checkout@v3
      
    - name: Log in to the Container registry
      uses: docker/login-action@v2
      with:
        registry: ${{ env.REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}      
      
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2


    - name: build local container
      uses: docker/build-push-action@v3
      with:
        tags: ${{ env.IMAGE }}
        push: true
        load: false

  Anchore:
    runs-on: ubuntu-latest
    needs: Build

    steps:
    
    - name: "Set IMAGE environmental variables"
      run: |
        echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
        
    - name: Checkout Code
      uses: actions/checkout@v3
        
    - name: Install Latest anchorectl Binary
      run: |
        curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b ${HOME}/.local/bin 
        export PATH="${HOME}/.local/bin/:${PATH}"
    
    - name: Queue Image for Scanning by Anchore Enterprise
      run: |        
       anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile ${IMAGE} 
                
    - name: Pull Vulnerability List
      run: |
        anchorectl image vulnerabilities ${IMAGE} 
        
    - name: Pull Policy Evaluation
      run: |
        # set "ANCHORECTL_FAIL_BASED_ON_RESULTS=true" (see above in the "env:" section) to break the pipeline here if the 
        # policy evaluation returns FAIL or add -f, --fail-based-on-results to this command for the same result
        #
        anchorectl image check --detail ${IMAGE}

5. Run Workflow

Go to “Actions”, select your new workflow (e.g. “Anchore Enterprise Distributed Scan”), and hit “Run workflow”.

Set Variables

6. View Results

When the workflow completes, view the results by clicking on the workflow name, then on the job (“Anchore”), then expanding the “Pull Vulnerability List” and/or “Pull Policy Evaluation” steps to see the details.

Set Variables

7. Notifications

You can also integrate your Anchore deployment with the GitHub API so that Anchore notifications are sent to GitHub Notifications as new issues in a repository.

To configure and enable this please review the GitHub Notifications documentation.

3.3.3 - Jenkins

1. Configure variables

Ensure that the following credentials are set in your Jenkins instance (Dashboard -> Manage Jenkins -> Credentials) as credential type “secret text”:

ANCHORECTL_USERNAME 
ANCHORECTL_PASSWORD
ANCHORECTL_URL

These are necessary for the integration to access your Anchore Enterprise deployment. Storing them as Jenkins secret-text credentials prevents exposure of the values in job logs.

2. Configure scanning mode

a) Distributed

This is the most easily scalable method for scanning images. Distributed scanning uses the anchorectl utility to build the SBOM directly on the build runner and then pushes the SBOM to Anchore Enterprise through the API. To use this scanning method, paste the following stage anywhere after your target container image has been built:

stage('Analyze Image w/ anchorectl') {
      environment {
        ANCHORECTL_URL = credentials("Anchorectl_Url")
        ANCHORECTL_USERNAME = credentials("Anchorectl_Username")
        ANCHORECTL_PASSWORD = credentials("Anchorectl_Password")
        // change ANCHORECTL_FAIL_BASED_ON_RESULTS to "true" if you want to break on policy violations
        ANCHORECTL_FAIL_BASED_ON_RESULTS = "false"
      }
      steps {
        script {
          sh """
            ### install latest anchorectl 
            curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b $HOME/.local/bin 
            export PATH="$HOME/.local/bin/:$PATH"          
            #
            ### actually add the image to the queue to be scanned
            #
            ### --wait tells anchorectl to block until the scan 
            ### is complete (this isn't always necessary but if 
            ### you want to pull the vulnerability list and/or 
            ### policy report, you need to wait
            #
            anchorectl image add --wait --from registry  ${REGISTRY}/${REPOSITORY}:${TAG}
            #
            ### pull vulnerability list (optional)
            anchorectl image vulnerabilities ${REGISTRY}/${REPOSITORY}:${TAG}
            ###
            ### check policy evaluation (omit --detail if you just
            ### want a pass/fail determination)
            anchorectl image check --detail ${REGISTRY}/${REPOSITORY}:${TAG}
            ### 
            ### if you want to break the pipeline on a policy violation, add "--fail-based-on-results"
            ### or change the ANCHORECTL_FAIL_BASED_ON_RESULTS variable above to "true"
          """
        } // end script 
      } // end steps
    } // end stage "analyze with anchorectl"

b) Centralized

Centralized Scanning: this method uses the “analyzer” pods in the Anchore Enterprise deployment to build the SBOM. This can create queuing if there are not enough analyzer processes, and this method may require the operator to provide registry credentials in the Enterprise backend (if the images to be scanned are in private registries). This method may be preferred in cases where the Anchore Enterprise operator does not control the image build process (the analyzers can simply poll registries to look for new image builds as they are pushed), and this method also allows the operator to simply queue up the image for asynchronous scanning later if vulnerability and policy results are not required immediately. If the user wants malware scanning results from Anchore Enterprise’s clamav integration, the Centralized Scanning method is required. To use this scanning method, paste the following stage anywhere after your target container image has been built. After building the image from your Dockerfile, this stage will tell Anchore Enterprise to scan the image, then it will display the vulnerability and policy results in the build log.

stage('Analyze Image w/ anchorectl') {
      environment {
        ANCHORECTL_URL = credentials("Anchorectl_Url")
        ANCHORECTL_USERNAME = credentials("Anchorectl_Username")
        ANCHORECTL_PASSWORD = credentials("Anchorectl_Password")
        // change ANCHORECTL_FAIL_BASED_ON_RESULTS to "true" if you want to break on policy violations
        ANCHORECTL_FAIL_BASED_ON_RESULTS = "false"
      }
      steps {
        script {
          sh """
            ### install latest anchorectl 
            curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b $HOME/.local/bin 
            export PATH="$HOME/.local/bin/:$PATH"          
            #
            ### actually add the image to the queue to be scanned
            #
            ### --wait tells anchorectl to block until the scan 
            ### is complete (this isn't always necessary but if 
            ### you want to pull the vulnerability list and/or 
            ### policy report, you need to wait
            #
            anchorectl image add --wait ${REGISTRY}/${REPOSITORY}:${TAG}
            #
            ### pull vulnerability list (optional)
            anchorectl image vulnerabilities ${REGISTRY}/${REPOSITORY}:${TAG}
            ###
            ### check policy evaluation (omit --detail if you just
            ### want a pass/fail determination)
            anchorectl image check --detail ${REGISTRY}/${REPOSITORY}:${TAG}
            ### 
            ### if you want to break the pipeline on a policy violation, add "--fail-based-on-results"
            ### or change the ANCHORECTL_FAIL_BASED_ON_RESULTS variable above to "true"
          """
        } // end script 
      } // end steps
    } // end stage "analyze with anchorectl"

3.4.1 - Kubernetes Admission Controller

For the download of the tool, see Kubernetes Admission Controller. You will also find more detailed information on both the configuration and use of the admission controller in the README.

For installation instructions, see Kubernetes Installation.

Anchore Enterprise can be integrated with Kubernetes to ensure that only certified images are started within a Kubernetes pod.

Kubernetes can be configured to use an Admission Controller to validate that the container image is compliant with the user’s policy before allowing or preventing deployment.

The admission controller can be configured to make a webhook call into Anchore Enterprise. Anchore Enterprise exports a Kubernetes-specific API endpoint and will return the pass or fail response in the form of an ImageReview response.
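As a sketch, the admission decision returned to Kubernetes follows the standard ImageReview shape from the imagepolicy.k8s.io API; the reason text here is illustrative:

{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "status": {
    "allowed": false,
    "reason": "image failed policy evaluation"
  }
}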

alt text

This approach allows the Kubernetes system to make the final decision on running a container image and does not require installation of any per-node plugins into Kubernetes.

The Anchore admission controller supports 3 different modes of operation allowing you to tune the tradeoff between control and intrusiveness for your environments.

  • Strict Policy-Based Admission Gating Mode: This is the strictest mode, and will admit only images that are already analyzed by Anchore and receive a “pass” on policy evaluation. This enables you to ensure, for example, that no image is deployed into the cluster that has a known high-severity CVE with an available fix, or any of several other conditions.

  • Analysis-Based Admission Gating Mode: Admit only images that are analyzed and known to Anchore, but do not execute or require a policy evaluation. This is useful in cases where you’d like to enforce the requirement that all images be deployed via a CI/CD pipeline, providing peace of mind that your image has been analyzed, but allowing the pipeline to determine what should run based on factors other than the image’s final policy evaluation.

  • Passive Analysis Trigger Mode: Trigger an Anchore analysis of images, but do not block execution on analysis completion or policy evaluation of the image. This is a way to ensure that all images that make it to deployment (test, staging, or prod) are guaranteed to have some form of analysis audit trail available and a presence in reports and notifications that are managed by Anchore.

Using native Kubernetes features allows the admission controller approach to be used in both on-prem and cloud-hosted Kubernetes environments.

3.4.2 - Kubernetes Runtime Inventory

Overview

Anchore uses a go binary called anchore-k8s-inventory that leverages the Kubernetes Go SDK to reach out and list containers in a configurable set of namespaces to determine which images are running.

anchore-k8s-inventory can be deployed via its helm chart, embedded within your Kubernetes cluster as an agent. It will require access to the Anchore API.

Deployment

The most common way to track inventory is to install anchore-k8s-inventory as an agent in your cluster. To do this you will need to configure credentials and information about your deployment in the values file. It is recommended to first configure a specific robot user for the account where you’ll want to track your Kubernetes inventory.

As an agent, anchore-k8s-inventory is installed using Helm; the Helm chart is hosted as part of the https://charts.anchore.io repo and is based on the anchore/k8s-inventory Docker image.

To install the helm chart, follow these steps:

  1. Configure your username, password, Anchore URL and cluster name in the values file.
k8sInventory:
  # Path should not be changed, cluster value is used to tell Anchore which cluster this inventory is coming from
  kubeconfig:
    cluster: <unique-name-for-your-cluster>

  anchore:
    url: <URL for your Anchore Enterprise deployment>

    # Note: recommend using the inventory-agent role
    user: <user>
    password: <password>
  2. Run helm install in the cluster(s) you wish to track
$ helm repo add anchore https://charts.anchore.io
$ helm install <release> -f <values.yaml> anchore/k8s-inventory

anchore-k8s-inventory must be able to resolve the Anchore URL and requires API credentials. Review the anchore-k8s-inventory logs if you are not able to see the inventory results in the UI.

Note: the Anchore API Password can be provided via a Kubernetes secret, or injected into the environment of the anchore-k8s-inventory container

  • For injecting the environment variable, see: injectSecretsViaEnv
  • For providing your own secret for the Anchore API Password, see: useExistingSecret. K8s Inventory creates its own secret based on your values.yaml file for the key k8sInventory.anchore.password, but the k8sInventory.useExistingSecret key allows you to create your own secret and provide it in the values file (see the sketch below). See the K8s Inventory repo for more information about the K8s Inventory specific configuration.
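A minimal sketch of the existing-secret approach follows. Note the assumptions: the secret key name mirrors the ECS agent's environment variable naming, and the existingSecretName values key is assumed by analogy with the ECS chart; verify both against the K8s Inventory repo.

# Hypothetical secret holding the Anchore API password (the env var name is an assumption)
apiVersion: v1
kind: Secret
metadata:
  name: k8s-inventory-secrets
type: Opaque
stringData:
  ANCHORE_K8S_INVENTORY_ANCHORE_PASSWORD: foobar

Then reference it from your values file:

k8sInventory:
  useExistingSecret: true
  existingSecretName: "k8s-inventory-secrets"  # key name assumed; check the chart's values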

Usage

To verify that you are tracking Kubernetes Inventory you can access inventory results with the command anchorectl inventory list and look for results where the TYPE is kubernetes.

The UI also displays the Kubernetes Inventory and allows operators to visually navigate the images, vulnerability results, and see the results of the policy evaluation.

For more details about watching clusters, and reviewing policy results see the Using Kubernetes Inventory section.

General Runtime Management

See Data Management

3.5 - ECS

Overview

Anchore uses a go binary called anchore-ecs-inventory that leverages the AWS Go SDK to gather an inventory of containers and their images running on ECS and report back to Anchore.

Deployment

Via Helm Chart

You can install the chart via:

helm repo add anchore https://charts.anchore.io
helm install <release-name> -f <values.yaml> anchore/ecs-inventory

A basic values file can always be found here. The key configurations are in the ecsInventory section.

Anchore ECS Inventory creates its own secret based on your values.yaml file for the following keys, which are required for successfully deploying and connecting the ecs-inventory service to the Anchore Platform and the AWS ECS service:

  • ecsInventory.awsAccessKeyId
  • ecsInventory.awsSecretAccessKey

Using your own secrets

The ecsInventory.useExistingSecret and ecsInventory.existingSecretName keys, or the ecsInventory.injectSecretsViaEnv key, allow you to create your own secret and provide it in the values file, or to place the required secret into the pod via other means, such as injecting the secrets into the pod using HashiCorp Vault.

For example:

  • Create a secret in kubernetes:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ecs-inventory-secrets
    type: Opaque
    stringData:
      ANCHORE_ECS_INVENTORY_ANCHORE_PASSWORD: foobar
      AWS_ACCESS_KEY_ID: someKeyId
      AWS_SECRET_ACCESS_KEY: someSecretAccessKey
    
  • Provide it to the helm chart via the values file:

    ecsInventory:
        useExistingSecret: true
        existingSecretName: "ecs-inventory-secrets"
    

The Anchore API Password and required AWS secret values can also be injected into the environment of the ecs-inventory container. To inject the environment variables, set:

ecsInventory:
  injectSecretsViaEnv: true

See the ecs-inventory repo for more information about the ECS Inventory specific configuration.

Via ECS

It is also possible to deploy the ecs-inventory container on ECS. Here is a sample task definition that could be used to deploy ecs-inventory with a default configuration:

{
    "family": "anchore-ecs-inventory-example-task-definition",
    "containerDefinitions": [
        {
            "name": "ecs-inventory",
            "image": "docker.io/anchore/ecs-inventory:latest",
            "cpu": 0,
            "essential": true,
            "environment": [
                {
                    "name": "ANCHORE_ECS_INVENTORY_ANCHORE_URL",
                    "value": "https://anchore.url"
                },
                {
                    "name": "ANCHORE_ECS_INVENTORY_ANCHORE_USER",
                    "value": "admin"
                },
                {
                    "name": "ANCHORE_ECS_INVENTORY_ANCHORE_ACCOUNT",
                    "value": "admin"
                },
                {
                    "name": "ANCHORE_ECS_INVENTORY_REGION",
                    "value": "us-east-2"
                }
            ],
            "secrets": [
                {
                    "name": "ANCHORE_ECS_INVENTORY_ANCHORE_PASSWORD",
                    "valueFrom": "arn:aws:ssm:${region}:${aws_account_id}:parameter/ANCHORE_ADMIN_PASS"
                },
                {
                    "name": "AWS_ACCESS_KEY_ID",
                    "valueFrom": "arn:aws:ssm:${region}:${aws_account_id}:parameter/ECS_INVENTORY_AWS_ACCESS_KEY_ID"
                },
                {
                    "name": "AWS_SECRET_ACCESS_KEY",
                    "valueFrom": "arn:aws:ssm:${region}:${aws_account_id}:parameter/ECS_INVENTORY_AWS_SECRET_ACCESS_KEY"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "/anchore/ecs-inventory",
                    "awslogs-region": "us-east-2",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "executionRoleArn": "arn:aws:iam::${aws_account_id}:role/ecsTaskExecutionRole",
    "networkMode": "awsvpc",
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "512",
    "memory": "1024",
    "runtimePlatform": {
        "cpuArchitecture": "X86_64",
        "operatingSystemFamily": "LINUX"
    }
}
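
Once saved to a file (for example, task-definition.json), the definition can be registered with the AWS CLI:

aws ecs register-task-definition --cli-input-json file://task-definition.json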

Usage

To verify that you are tracking ECS Inventory you can access inventory results with the command anchorectl inventory list and look for results where the TYPE is ecs.

Watching ECS Inventory to auto analyze

It is possible to create a subscription to watch for new ECS Inventory that is reported to Anchore and automatically schedule those images for analysis.

1. Create the subscription

A subscription can be created by sending a POST to /v1/subscriptions with the following payload:

{
  "subscription_key": "<SUBSCRIPTION_KEY>",
  "subscription_type": "runtime_inventory"
}

Curl example:

curl -X POST -u USERNAME:PASSWORD --url ANCHORE_URL/v1/subscriptions --header 'Content-Type: application/json' --data '{
  "subscription_key": "arn:aws:ecs:eu-west-2:123456789012:cluster/myclustername",
  "subscription_type": "runtime_inventory"
}'

The subscription_key can be set to any part of an ECS ClusterARN. For example, setting the subscription_key to the:

  • full ClusterARN arn:aws:ecs:us-east-1:012345678910:cluster/telemetry will create a subscription that only watches this cluster
  • partial ClusterARN arn:aws:ecs:eu-west-2:988505687240 will result in a subscription that watches every cluster within the account 988505687240

2. Activate the subscription

After a subscription has been created, it needs to be activated. This can be achieved with anchorectl.

anchorectl subscription activate <SUBSCRIPTION_KEY> runtime_inventory
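
To confirm the subscription afterwards, you can list subscriptions and check for your runtime_inventory entry:

anchorectl subscription list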

General Runtime Management

See Data Management

3.6 - Git for Source Code

Use anchorectl to generate a software bill of materials (SBOM) and import a source repository artifact from a file location on disk. You can also get information about the source repository, investigate vulnerability packages by requesting vulnerabilities for a single analyzed source repository, or get any policy evaluations. The workflow would generally be as follows.

  1. Generate an SBOM. The format is similar to the following: syft <path> -o json > <resulting filename>.json For example:
$ syft dir:/path/to/your/source/code -o json > my_sbom.json
  2. Import the SBOM from a source with metadata. This would normally occur as part of a CI/CD pipeline, and the various metadata would be programmatically added via environment variables. The response from anchorectl includes the new ID of the Source in Anchore Enterprise. For example:
# anchorectl source add github.com/my-project@12345 --branch test --author [email protected] --workflow-name default --workflow-timestamp 2002-10-02T15:00:00Z --from ./my_sbom.json
 ✔ Added Source                                                                          github.com/my-project@12345
 ✔ Imported SBOM                                                                                         /tmp/s.json
Source:
  status:           not-analyzed (active)
  uuid:             fa416998-59fa-44f7-8672-dc267385e799
  source:           github.com/my-project@12345
  vcs:              git
  branch:           test
  workflow:         default
  author:           [email protected]
  3. List the source repositories that you have sent to Anchore Enterprise. This command will allow the operator to list all available source repositories within the system and their current status.
# anchorectl source list
 ✔ Fetched sources
┌──────────────────────────────────────┬────────────┬─────────────────────┬──────────────────────────────────────────┬─────────────────┬───────────────┐
│ UUID                                 │ HOST       │ REPOSITORY          │ REVISION                                 │ ANALYSIS STATUS │ SOURCE STATUS │
├──────────────────────────────────────┼────────────┼─────────────────────┼──────────────────────────────────────────┼─────────────────┼───────────────┤
│ fa416998-59fa-44f7-8672-dc267385e799 │ github.com │ my-project          │ 12345                                    │ analyzed        │ active        │
└──────────────────────────────────────┴────────────┴─────────────────────┴──────────────────────────────────────────┴─────────────────┴───────────────┘
  4. Fetch the uploaded SBOM for a source repository from Anchore Enterprise. The identifier for this command is taken from the UUID(s) of the listed source repositories.
# anchorectl source sbom fa416998-59fa-44f7-8672-dc267385e799 -f /tmp/sbom.json
 ✔ Fetched SBOM
  5. Get detailed information about a source. For example:
# anchorectl source get fa416998-59fa-44f7-8672-dc267385e799
 ✔ Fetched source
Uuid: fa416998-59fa-44f7-8672-dc267385e799
Host: github.com
Repository: my-project
Revision: 12345
Vcs Type: git
Metadata Records:
  - branchName: test
    changeAuthor: [email protected]
    ciWorkflowExecutionTime: "2002-10-02T15:00:00Z"
    ciWorkflowName: default
    uuid: ae5f6617-5ad5-47dd-81ca-8fcb10391fed
Analysis Status: analyzed
Source Status: active
  6. Use anchorectl to investigate vulnerability packages by requesting vulnerabilities for a single analyzed source repository. You can choose os, non-os, or all. For example:
# anchorectl source vulnerabilities fa416998-59fa-44f7-8672-dc267385e799
 ✔ Fetched vulnerabilities                   [48 vulnerabilities]                                                                                                                                                             fa416998-59fa-44f7-8672-dc267385e799
┌─────────────────────┬──────────┬────────────┬─────────┬────────┬──────────────┬──────┬─────────────┬───────────────────────────────────────────────────┐
│ ID                  │ SEVERITY │ NAME       │ VERSION │ FIX    │ WILL NOT FIX │ TYPE │ FEED GROUP  │ URL                                               │
├─────────────────────┼──────────┼────────────┼─────────┼────────┼──────────────┼──────┼─────────────┼───────────────────────────────────────────────────┤
│ GHSA-p6xc-xr62-6r2g │ High     │ log4j-core │ 2.14.1  │ 2.17.0 │ false        │ java │ github:java │ https://github.com/advisories/GHSA-p6xc-xr62-6r2g │
│ GHSA-7rjr-3q55-vv33 │ Critical │ log4j-core │ 2.14.1  │ 2.16.0 │ false        │ java │ github:java │ https://github.com/advisories/GHSA-7rjr-3q55-vv33 │
│ GHSA-8489-44mv-ggj8 │ Medium   │ log4j-core │ 2.14.1  │ 2.17.1 │ false        │ java │ github:java │ https://github.com/advisories/GHSA-8489-44mv-ggj8 │
│ CVE-2021-45105      │ Medium   │ log4j-api  │ 2.14.1  │ None   │ false        │ java │ nvd         │ https://nvd.nist.gov/vuln/detail/CVE-2021-45105   │
...
  7. Use anchorectl to compute a policy evaluation for a source. For example:
# anchorectl source check fa416998-59fa-44f7-8672-dc267385e799
 ✔ Evaluated against policy                  [failed]                                                                                                                                                                         fa416998-59fa-44f7-8672-dc267385e799
Evaluation ID: 3e490750b404eb1b09baf019a4df3942
Source ID: fa416998-59fa-44f7-8672-dc267385e799
Host: github.com
Repository: my-project
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Updated: 2022-08-30T15:58:24Z
Evaluation: fail

(Use -o json option to get more detailed output)

  8. Use anchorectl to delete any individual source repository artifacts from Anchore Enterprise. For example:
# anchorectl source delete fa416998-59fa-44f7-8672-dc267385e799
 ✔ Deleted source
Uuid: fa416998-59fa-44f7-8672-dc267385e799
Host: github.com
Repository: my-project
Revision: 12345
Vcs Type: git
Metadata Records:
  - branchName: test
    changeAuthor: [email protected]
    ciWorkflowExecutionTime: "2002-10-02T15:00:00Z"
    ciWorkflowName: default
    uuid: ae5f6617-5ad5-47dd-81ca-8fcb10391fed
Analysis Status: analyzed
Source Status: deleting

3.7 - ServiceNow

Anchore Enterprise supports integration with the ServiceNow Container Vulnerability Response (CVR) module. This integration allows CVR to collect data from Anchore’s APIs about vulnerabilities and create the associated CVITs. For more information about the ServiceNow CVR module, please consult the formal documentation on ServiceNow’s website.

For information about how to integrate ServiceNow with Anchore and to get access to the plugin for use with your ServiceNow platform, please contact Anchore Support.

4 - Configuring Anchore

Configuring Anchore Enterprise starts with configuring each of the core services. Anchore Enterprise deployments using docker compose for trials or Helm for production are designed to run by default with no modifications necessary to get started, although many options are available to tune your production deployment to fit your needs.

About Configuring Anchore Enterprise

All system services (except the UI, which has its own configuration) require a single configuration file which is read from /config/config.yaml when each service starts up. Settings in this file are mostly static settings that are fundamental to the deployment of system services. They are most often updated when the system is being initially tuned for a deployment and may, infrequently, need to be updated after they have been set as appropriate for any given deployment.

By default, Anchore Enterprise includes a config.yaml that is functional out-of-the-box, with some parameters set to an environment variable for common site-specific settings. These settings are then set either in docker-compose.yaml, by the Helm chart, or as appropriate for other orchestration/deployment tools.

When deploying Anchore Enterprise using the Helm chart, you can configure it by modifying the anchoreConfig section in your values file. This section corresponds to the default config.yaml file included in the Anchore Enterprise container image. The values file serves to override the default configurations and should be modified to suit your deployment.

4.1 - General Configuration

Initial Configuration

A single configuration file config.yaml is required to run Anchore - by default, this file is embedded in the Enterprise container image, located at /config/config.yaml. The default configuration file is provided as a way to get started; it is functional out of the box, without modification, when combined with either the Helm or Docker Compose method of installing Enterprise. The default configuration is set up to use environment variable substitutions so that configuration values can be controlled by setting the corresponding environment variables at deployment time (see Using Environment Variables in Anchore).

Each environment variable (starting with ANCHORE_) in the default config.yaml is set (either the baseline as set in the Dockerfile, or an override in docker compose or Helm or through a system default) to ensure that the system comes up with a fully populated configuration.

Some examples of useful initial settings follow.

  • Default admin credentials: A default admin email and password are required in the catalog service for the initial bootstrap of Enterprise to succeed; both are set through the default config file using the ANCHORE_ADMIN_PASSWORD and ANCHORE_ADMIN_EMAIL environment variables respectively. The system defines a default email admin@myanchore, but does not define a default password. If using the default config file, you must set a value for ANCHORE_ADMIN_PASSWORD for the initial bootstrap of the system to succeed. To set the default password or to override the default email, simply add overrides for the ANCHORE_ADMIN_PASSWORD and ANCHORE_ADMIN_EMAIL environment variables, set to your preferred values, prior to deploying Anchore Enterprise. After the initial bootstrap, the password override can be removed if desired.
default_admin_password: '${ANCHORE_ADMIN_PASSWORD}'
default_admin_email: '${ANCHORE_ADMIN_EMAIL}'
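
As a sketch, these overrides can be supplied through the service environment in docker-compose.yaml (the values shown are placeholders; the catalog service is shown since it performs the bootstrap):

services:
  catalog:
    environment:
      - ANCHORE_ADMIN_PASSWORD=yourstrongpassword
      - [email protected]
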
  • Log level: Anchore Enterprise is configured to run at the INFO log level by default. The full set of options is CRITICAL, ERROR, WARNING, INFO, and DEBUG (in ascending order of log output verbosity). Admin accounts can set the system log level through the dashboard, which allows differing per-service log levels; changes made through the dashboard do not require a system restart. The log level can also be set globally (across all services) with an override for ANCHORE_LOG_LEVEL prior to deploying Anchore Enterprise; this approach requires a system restart to take effect.
log_level: '${ANCHORE_LOG_LEVEL}'
  • Postgres Database: Anchore Enterprise requires access to a PostgreSQL database to operate. The database can be run as a container with a persistent volume (which is set up automatically if the example docker-compose.yaml is used) or outside of your container environment. If you wish to use an external Postgres database, the elements of the connection string in the config.yaml can be specified as environment variable overrides. The default configuration is set up to connect to a Postgres DB that is deployed alongside the Anchore Enterprise services when using Docker Compose or Helm, at the internal host anchore-db on port 5432, using username postgres, password mysecretpassword, and DB postgres. If an external database service is being used, you will need to provide the user, password, host, port, and DB name environment variables, as shown below.
db_connect: 'postgresql://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}:${ANCHORE_DB_PORT}/${ANCHORE_DB_NAME}'
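
For example, a sketch of supplying these overrides for an external database via docker-compose.yaml (host and credentials are placeholders; each Anchore Enterprise service definition needs the same overrides):

services:
  catalog:
    environment:
      - ANCHORE_DB_HOST=db.example.internal
      - ANCHORE_DB_PORT=5432
      - ANCHORE_DB_USER=anchore
      - ANCHORE_DB_PASSWORD=examplepassword
      - ANCHORE_DB_NAME=anchore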

Manual Configuration File Override

While Anchore Enterprise is set up to run out of the box without modifications, and many useful values can be overridden using environment variables as described above, you can always opt to have full control over the configuration by providing a config.yaml file explicitly, typically by generating the file and making it available from an external mount/configmap/etc. at deployment time. A good way to start, if you wish to provide your own config.yaml, is to extract the default config.yaml from the Anchore Enterprise container image, modify it, and then override the embedded /config/config.yaml at deployment time. For example:

  • Extract the default config file from the anchore/enterprise container image:
# docker pull docker.io/anchore/enterprise:v5.X.X
# docker create --name ae docker.io/anchore/enterprise:v5.X.X
# docker cp ae:/config/config.yaml ./my_config.yaml
# docker rm ae
  • Modify the configuration file to your liking.

  • Set up your deployment to override the embedded /config/config.yaml at run time (the example below shows how to achieve this with Docker Compose). Edit the docker-compose.yaml to include a volume mount that mounts your my_config.yaml over the embedded /config/config.yaml, resulting in a volumes section for each Anchore Enterprise service definition.

...
  api:
...
    volumes:
     - /path/to/my_config.yaml:/config/config.yaml:z
...
  catalog:
...
    volumes:
     - /path/to/my_config.yaml:/config/config.yaml:z
...
  simpleq:
...
    volumes:
     - /path/to/my_config.yaml:/config/config.yaml:z
...
  policy-engine:
...
    volumes:
     - /path/to/my_config.yaml:/config/config.yaml:z
...
  analyzer:
...
    volumes:
     - /path/to/my_config.yaml:/config/config.yaml:z
...

Now, each service will come up with your external my_config.yaml mounted over the embedded /config/config.yaml.

4.1.1 - API Accessible Configuration

Anchore Enterprise provides the ability to view the majority of system configurations via the API. A select number of configurations can also be modified through the API, or through the System Configuration tab from the UI. This enables control of system behaviours and characteristics in a programmatic way without needing to restart your deployment.

Configuration Provenance

Configuration values are established by a priority system which derives the final running value based on the location from which the value was read. This provenance priority mechanism enables backwards compatibility and increased security for customers wishing tighter control of their system’s runtime.

1. Config File Source

Any configuration variable which is declared within an on-disk configuration file (e.g., via a Helm values.yaml) will be honored first. These configuration variables will be read-only through the API and UI. Any configuration that is currently set for your deployment will be preserved.

2. API Config Source

When a configuration variable is not declared in a config file and is one of the select few that are allowed to be modified via the API, its value will be stored in the system database. These configuration variables can be modified via the API and the changes will be reflected within the deployment within a short time without having to restart your deployment.

The following configuration variables are modifiable through the API:

  • services.analyzer.analyzer_scanner_config.malware.clamav.enabled
  • logging.log_level
  • services.apiext.logging.log_level
  • services.analyzer.logging.log_level
  • services.catalog.logging.log_level
  • services.policy_engine.logging.log_level
  • services.simplequeue.logging.log_level
  • services.notifications.logging.log_level
  • services.data_syncer.logging.log_level
  • services.reports.logging.log_level
  • services.reports_worker.logging.log_level

3. Anchore Enterprise Defaults

If a configuration variable has not been declared by any other source it inherits the default system value. These are values chosen by Anchore as safe and functional defaults.

For the majority of deployments we recommend you simply use the default settings for most values as they have been calibrated to offer the best experience out of the box.

⚠️ Note: These values may change from release to release. Notable changes will be communicated through Release Notes. Significant system behaviour changes will not be performed between minor releases. If you wish to ensure a default is not changed, “pin” the value either by providing it explicitly in the config file or setting it directly via the API (when available).

Blocked by Config File

Attempts to set a configuration through the API may return a blocked_by_config_file error. This error is raised when a configuration variable is not API-editable because it is set in a config file.

Locations to inspect for a pinned config value are:

  • the Helm attached values.yaml file (continue reading for further instructions)
  • the compose attached config.yaml file
  • the environment variables being passed into the container

Disabling Helm Chart Configs

For Helm users, to make a value configurable through the API, set the value in the attached values.yaml to "<ALLOW_API_CONFIGURATION>"

As an example:

services:
  analyzer:
    malware:
      clamav:
        enabled: "<ALLOW_API_CONFIGURATION>"

API Configurations Refresh

Each running container will refresh its internal configuration state approximately every 30 seconds. After making a change please allow a minute to elapse for the changes to flush out to all running nodes.

Config File Configurations Refresh

Changes made to a mounted config file will be detected and will cause a system configuration refresh. If a configuration variable that is changed requires a system restart a log message will be printed to that effect. If a system is pending restart, none of the changes will take effect until this is performed. This config file watcher includes a mounted .env file if in use.

Config Validation

In a future major release of Anchore Enterprise, we will enable strict type and data validation for the configuration. This will result in any non-compliant Enterprise deployments failing to boot. For now, this is not enforced.

For awareness, validation of your configuration variables and values will occur during system boot. If any configuration fails this validation, a log message mentioning lenient mode will be published. This will be accompanied by details on which entries have failed. To avoid system downtime in a future major release, consider resolving these issues today.

Secret Values

Secret values cannot be read through the API; they will return as the string <redacted>. It is, however, possible to update them. It is not possible to set any secret value to the string <redacted>.

Security Permissions

Config file permissions have not changed; they are editable by the system deployer. API requests to make changes to configuration will only be accepted if they come from an Anchore user with system-admin permissions.

4.1.2 - Environment Variables

Environment variable references may be used in the Anchore config.yaml file to set values that need to be configurable during deployment.

Using this mechanism a common configuration file can be used with multiple Anchore instances with key values being passed using environment variables.

The config.yaml configuration file is read by Anchore and any references to variables prefixed with ANCHORE will be replaced by the value of the matching environment variable.

For example, in the sample configuration file, the host_id parameter is set by appending the ANCHORE_HOST_ID variable to the string dockerhostid:

host_id: 'dockerhostid-${ANCHORE_HOST_ID}'

Notes:

  1. Only variables prefixed with ANCHORE will be replaced
  2. If an environment variable is referenced in the configuration file but not set in the environment then a warning will be logged
  3. It is recommended to use curly braces, for example ${ANCHORE_PARAM}, to avoid potentially ambiguous cases
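
For example, assuming ANCHORE_HOST_ID is set to node01 at deployment time (an illustrative value), the substitution above resolves as follows:

# environment at deployment time
ANCHORE_HOST_ID=node01

# config.yaml entry
host_id: 'dockerhostid-${ANCHORE_HOST_ID}'

# effective value after substitution
host_id: 'dockerhostid-node01'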

Passing Environment Variables as a File

Environment variables may also be passed as a file containing key-value pairs.

ANCHORE_HOST_ID=myservice1
ANCHORE_LOG_LEVEL=DEBUG

The system will check for an environment variable named ANCHORE_ENV_FILE. If this variable is set, Anchore will attempt to read a file at the location it specifies.

The Anchore environment file is read before any other Anchore environment variables, so any ANCHORE variables passed in the environment will override the values set in the environment file.
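
As a minimal sketch (file path and values are placeholders), with Docker Compose you could mount such a file into a service and point ANCHORE_ENV_FILE at it:

services:
  catalog:
    environment:
      - ANCHORE_ENV_FILE=/config/anchore.env
    volumes:
      - ./anchore.env:/config/anchore.env:z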

4.2 - Enterprise UI Configuration

The Enterprise UI service has some static configuration options that are read from /config/config-ui.yaml inside the UI container image when the system starts up.

The configuration is designed to not require any modification when using the quickstart (docker compose) or production (Helm) methods of deploying Anchore Enterprise. If modifications are desired, the options, their meanings, and environment overrides are listed below for reference:

  • The (required) license_path key specifies the location of the local system folder containing the license.yaml license file required by the Anchore Enterprise UI web service for product activation. This value can be overridden by using the ANCHORE_LICENSE_PATH environment variable.

    license_path: '/'
    
  • The (required) enterprise_uri key specifies the address of the Anchore Enterprise service. The value must be a string containing a properly-formed ‘http’ or ‘https’ URI. This value can be overridden by using the ANCHORE_ENTERPRISE_URI environment variable.

    enterprise_uri: 'http://api:8228/v2'
    
  • The (required) redis_uri key specifies the address of the Redis service. The value must be a string containing a properly-formed ‘http’, ‘https’, or redis URI. Note that the default configuration uses the REdis Serialization Protocol (RESP). This value can be overridden by using the ANCHORE_REDIS_URI environment variable.

    redis_uri: 'redis://ui-redis:6379'
    
  • The (required) appdb_uri key specifies the location and credentials for the postgres DB endpoint used by the UI. The value must contain the host, port, DB user, DB password, and DB name. This value can be overridden by using the ANCHORE_APPDB_URI environment variable.

    appdb_uri: 'postgres://<db-user>:<db-pass>@<db-host>:<db-port>/<db-name>'
    
  • The (required) reports_uri key specifies the address of the Reports service. The value must be a string containing a properly-formed ‘http’ or ‘https’ URI and can be overridden by using the ANCHORE_REPORTS_URI environment variable.

    Note that the presence of an uncommented reports_uri key in this file (even if unset, or set with an invalid value) instructs the Anchore Enterprise UI web service that the Reports feature must be enabled.

    reports_uri: 'http://reports:8228/v2'
    
  • The (optional) enable_ssl key specifies if SSL operations should be enabled within the web app runtime. When this value is set to True, secure cookies will be used with a SameSite value of None. The value must be a Boolean, and defaults to False if unset.

    Note: Only enable this property if your UI deployment is configured to run within an SSL-enabled environment (for example, behind a reverse proxy, in the presence of signed certs, etc.)

    This value can be overridden by using the ANCHORE_ENABLE_SSL environment variable.

    enable_ssl: False
    
  • The (optional) enable_proxy key specifies whether to trust a reverse proxy when setting secure cookies (via the X-Forwarded-Proto header). The value must be a Boolean, and defaults to False if unset. In addition, SSL must be enabled for this to work. This value can be overridden by using the ANCHORE_ENABLE_PROXY environment variable.

    enable_proxy: False
    
  • The (optional) allow_shared_login key specifies if a single set of user credentials can be used to start multiple Anchore Enterprise UI sessions; for example, by multiple users across different systems, or by a single user on a single system across multiple browsers.

    When set to False, only one session per credential is permitted at a time, and logging in will invalidate any other sessions that are using the same set of credentials. If this property is unset, or is set to anything other than a Boolean, the web service will default to True.

    Note that setting this property to False does not prevent a single session from being viewed within multiple tabs inside the same browser. This value can be overridden by using the ANCHORE_ALLOW_SHARED_LOGIN environment variable.

    allow_shared_login: True
    
  • The (optional) redis_flushdb key specifies if the Redis datastore containing user session keys and data is emptied on application startup. If the datastore is flushed, any users with active sessions will be required to re-authenticate.

    If this property is unset, or is set to anything other than a Boolean, the web service will default to True. This value can be overridden by using the ANCHORE_REDIS_FLUSHDB environment variable.

    redis_flushdb: True
    
  • The (optional) custom_links key allows a list of up to 10 external links to be provided (additional items will be excluded). The top-level title key provides the label for the menu (if present; otherwise the string “Custom External Links” will be used instead).

    Each link entry must have a non-empty title and a valid URI. If either item is invalid, the entry will be excluded.

    custom_links:
      title: Custom External Links
      links:
      - title: Example Link 1
        uri: https://example.com
      - title: Example Link 2
        uri: https://example.com
      - title: Example Link 3
        uri: https://example.com
      - title: Example Link 4
        uri: https://example.com
      - title: Example Link 5
        uri: https://example.com
      - title: Example Link 6
        uri: https://example.com
      - title: Example Link 7
        uri: https://example.com
      - title: Example Link 8
        uri: https://example.com
      - title: Example Link 9
        uri: https://example.com
      - title: Example Link 10
        uri: https://example.com
    
  • The (optional) force_websocket key specifies if the WebSocket protocol must be used for socket message communications. By default, long-polling is initially used to establish the handshake between client and web service, followed by a switch to WS if the WebSocket protocol is supported.

    If this value is unset, or is set to anything other than a Boolean, the web service will default to False.

    This value can be overridden by using the ANCHORE_FORCE_WEBSOCKET environment variable.

    force_websocket: False
    
  • The (optional) authentication_lock keys specify if a user should be temporarily prevented from logging in to an account after one or more failed authentication attempts. For this feature to be enabled, both values must be whole numbers greater than 0. They can be overridden by using the ANCHORE_AUTHENTICATION_LOCK_COUNT and ANCHORE_AUTHENTICATION_LOCK_EXPIRES environment variables.

    The count value represents the number of failed authentication attempts allowed to take place before a temporary lock is applied to the username. The expires value represents, in seconds, how long the lock will be applied for.

    Note that, for security reasons, when this feature is enabled it will be applied to any submitted username, regardless of whether the user exists.

    authentication_lock:
      count: 5
      expires: 300
    
  • The (optional) enable_add_repositories key specifies if repositories can be added via the application interface by either administrative users or standard users. In the absence of this key, the default is True. When disabled, this property also suppresses the availability of the Watch Repository toggle associated with any repository entries displayed in the Artifact Analysis view.

    Note that in the absence of one or all of the child properties, the default is also True. Thus, to disable the feature for a given account type, this key must be present and the child key corresponding to that account type must be explicitly set to False.

    enable_add_repositories:
      admin: True
      standard: True
    
  • The (optional) ldap_timeout and ldap_connect_timeout keys respectively specify the time (in milliseconds) the LDAP client should let operations stay alive before timing out, and the time (in milliseconds) the LDAP client should wait before timing out on TCP connections. Each value must be a whole number greater than 0.

    When these values are unset (or set incorrectly) the app will fall back to using a default value of 6000 milliseconds. The same default is used when the keys are not enabled.

    These values can be overridden by using the ANCHORE_LDAP_AUTH_TIMEOUT and ANCHORE_LDAP_AUTH_CONNECT_TIMEOUT environment variables.

    ldap_timeout: 6000
    ldap_connect_timeout: 6000
    
  • The (optional) custom_message key allows you to provide a message that will be displayed on the application login page below the Username and Password fields. The key value must be an object that contains:

    • A title key, whose string value provides a title for the message (up to 100 characters)
    • A message key, whose string value is the message itself (up to 500 characters)
    custom_message:
      title:
        "Title goes here..."
      message:
        "Message goes here..."
    

    Note: Both title and message values must be present and contain at least 1 character for the message box to be displayed. If either value exceeds the character limit, the string will be truncated with an ellipsis.

  • The (optional) log_level key allows you to set the descriptive detail of the application log output. The key value must be a string selected from the following priority-ordered list:

    • error
    • warn
    • info
    • http
    • debug

    Once set, each level will automatically include the output for any levels above it. For example, info will include the log output for the warn and error levels, whereas error will only show error output.

    This value can be overridden by using the ANCHORE_LOG_LEVEL environment variable. When no level is set, either within this configuration file or by the environment variable, a default level of http is used.

    log_level: 'http'
    
  • The (optional) enrich_inventory_view key allows you to set whether the Kubernetes feature should aggregate and include compliance and vulnerability data from the reports service. Setting this key to False can increase performance on high-volume systems.

    This value can be overridden by using the ANCHORE_ENRICH_INVENTORY_VIEW environment variable. When no flag is set, either within this configuration file or by the environment variable, a default setting of True is used.

    enrich_inventory_view: True
    
  • The (optional) enable_prometheus_metrics key enables exporting monitoring metrics to Prometheus. The metrics are made available on the /metrics endpoint.

    This value can be overridden by using the ANCHORE_ENABLE_METRICS environment variable. When no flag is set, either within this configuration file or by the environment variable, a default setting of False is used.

    enable_prometheus_metrics: False
    

NOTE: The latest default UI configuration file can always be extracted from the Enterprise UI container to review the latest options, environment overrides and descriptions of each option using the following process:

# docker login
# docker pull docker.io/anchore/enterprise-ui:latest
# docker create --name aui docker.io/anchore/enterprise-ui:latest
# docker cp aui:/config/config-ui.yaml /tmp/my-config-ui.yaml
# docker rm aui
# cat /tmp/my-config-ui.yaml
...
...

4.3 - Configuring AnchoreCTL

The anchorectl command can be configured with command-line arguments, environment variables, and/or a configuration file. Typically, a configuration file should be created to set any static configuration parameters (your Anchore Enterprise URL, logging behavior, cataloger configurations, etc.), so that invocations of the tool only require you to provide command-specific parameters as environment/CLI options. However, to fully support stateless scripting, a configuration file is not strictly required (settings can be put in environment/CLI options).

Important AnchoreCTL is version-aligned with Anchore Enterprise for major/minor. Please refer to the Enterprise Release Notes for the supported version of AnchoreCTL.

The anchorectl tool will search for an available configuration file using the following search order, until it finds a match:

  1. .anchorectl.yaml
  2. anchorectl.yaml
  3. .anchorectl/config.yaml
  4. ~/.anchorectl.yaml
  5. ~/anchorectl.yaml
  6. $XDG_CONFIG_HOME/anchorectl/config.yaml

Note: anchorectl can also utilize inline environment variables, which override any configuration file settings.

For the most basic functional invocation of anchorectl, the only required parameters are listed below:

  url: ""        # the URL to the Anchore Enterprise API (env var: "ANCHORECTL_URL")
  username: ""   # the Anchore Enterprise username (env var: "ANCHORECTL_USERNAME")
  password: ""   # the Anchore Enterprise user's login password (env var: "ANCHORECTL_PASSWORD")

For example, with our Docker Compose quickstart deployment of Anchore Enterprise running on your local system, your ~/.anchorectl.yaml would look like the following:

  url:      "http://localhost:8228"
  username: "admin"
  password: "yourstrongpassword"

A good way to quickly test that your anchorectl client is ready to use against a deployed and running Anchore Enterprise endpoint is to exercise the system status call, which will display status information fetched from your Enterprise deployment. With ~/.anchorectl.yaml installed and populated correctly, no environment or parameters are required:

anchorectl system status
 ✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE         │ HOST ID            │ URL                         │ UP   │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer        │ anchore-quickstart │ http://analyzer:8228        │ true │ available      │ 5140       │ 5.14.0       │
│ policy_engine   │ anchore-quickstart │ http://policy-engine:8228   │ true │ available      │ 5140       │ 5.14.0       │
│ apiext          │ anchore-quickstart │ http://api:8228             │ true │ available      │ 5140       │ 5.14.0       │
│ reports         │ anchore-quickstart │ http://reports:8228         │ true │ available      │ 5140       │ 5.14.0       │
│ reports_worker  │ anchore-quickstart │ http://reports-worker:8228  │ true │ available      │ 5140       │ 5.14.0       │
│ data_syncer     │ anchore-quickstart │ http://data-syncer:8228     │ true │ available      │ 5140       │ 5.14.0       │
│ simplequeue     │ anchore-quickstart │ http://queue:8228           │ true │ available      │ 5140       │ 5.14.0       │
│ notifications   │ anchore-quickstart │ http://notifications:8228   │ true │ available      │ 5140       │ 5.14.0       │
│ catalog         │ anchore-quickstart │ http://catalog:8228         │ true │ available      │ 5140       │ 5.14.0       │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘

Congratulations, you should now have a working AnchoreCTL.

Using Environment Variables

For some use cases, being able to supply inline environment variables can be useful; see the following system status call as an example.

ANCHORECTL_URL="http://localhost:8228" ANCHORECTL_USERNAME="admin" ANCHORECTL_PASSWORD="foobar" anchorectl system status
 ✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE         │ HOST ID            │ URL                         │ UP   │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ reports         │ anchore-quickstart │ http://reports:8228         │ true │ available      │ 5140       │ 5.14.0       │
│ analyzer        │ anchore-quickstart │ http://analyzer:8228        │ true │ available      │ 5140       │ 5.14.0       │
│ notifications   │ anchore-quickstart │ http://notifications:8228   │ true │ available      │ 5140       │ 5.14.0       │
│ apiext          │ anchore-quickstart │ http://api:8228             │ true │ available      │ 5140       │ 5.14.0       │
│ policy_engine   │ anchore-quickstart │ http://policy-engine:8228   │ true │ available      │ 5140       │ 5.14.0       │
│ reports_worker  │ anchore-quickstart │ http://reports-worker:8228  │ true │ available      │ 5140       │ 5.14.0       │
│ simplequeue     │ anchore-quickstart │ http://queue:8228           │ true │ available      │ 5140       │ 5.14.0       │
│ catalog         │ anchore-quickstart │ http://catalog:8228         │ true │ available      │ 5140       │ 5.14.0       │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘

All the environment variable options can be seen by using anchorectl --help.

Using API Keys

If you do not want to expose your private credentials in the configuration file, you can generate an API key that allows most of the functionality of anchorectl. Please see Generating API Keys.

Once you generate the API Key, the UI will give you a key value. You can use this key with the anchorectl configuration:

  url: "http://localhost:8228"
  username: "_api_key"
  password: <API Key Value>

NOTE: API Keys authenticate using HTTP basic auth. The username for API keys has to be _api_key.

Without setting up ~/.anchorectl.yaml or any configuration file, you can interact using environment variables. For example, substituting your API key value for the placeholder:
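
ANCHORECTL_URL="http://localhost:8228" ANCHORECTL_USERNAME="_api_key" ANCHORECTL_PASSWORD="<API Key Value>" anchorectl system status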

Using Distributed Analysis Mode

If you intend to use anchorectl in Distributed Analysis mode, you’ll need to enable two additional catalogers (secret-search and file-contents) to mirror the default behavior of Anchore Enterprise when performing an image analysis in Centralized Analysis mode. Below are the ~/.anchorectl.yaml settings that mirror the Anchore Enterprise defaults.

  secret-search:
    cataloger:
      enabled: true
      scope: Squashed
    additional-patterns: {}
    exclude-pattern-names: []
    reveal-values: false
    skip-files-above-size: 10000
  content-search:
    cataloger:
      enabled: false
      scope: Squashed
    patterns: {}
    reveal-values: false
    skip-files-above-size: 10000
  file-contents:
    cataloger:
      enabled: true
      scope: Squashed
    skip-files-above-size: 1048576
    globs: ['/etc/passwd']

For more information on using anchorectl in Distributed Analysis mode, see Concepts: Image Analysis and AnchoreCTL Usage: Images.

Using AnchoreCTL Help & Debug Modes

The anchorectl tool has extensive built-in help information for each command and operation, with many of the parameters allowing for environment overrides. To start with anchorectl, you can run the command with --help to see all the operation sections available:

anchorectl --help

A convenient way to see your changes taking effect is to instruct anchorectl to output DEBUG level logs to the screen using the -vv flag, which will display the full configuration that the tool is using (including the options you set, plus all the defaults and additional configuration file options available).

anchorectl -vv

NOTE: if you would like to capture the full default configuration as displayed when running with -vv, you can paste that output as the contents of your .anchorectl.yaml, and then work with the settings for full control.

If you need any more help, please learn about Verifying Service Health.

4.4 - Using the Analysis Archive

As mentioned in concepts, there are two locations for image analysis to be stored:

  • The working set: the standard state after analysis completes. In this location, the image is fully loaded and available for policy evaluation, content, and vulnerability queries.
  • The archive set: a location to keep image analysis data that cannot be used for policy evaluation or queries, but which uses cheaper storage and less DB space and can be reloaded into the working set as needed.

Working with the Analysis Archive

List archived images:

anchorectl archive image list
 ✔ Fetched archive-images
┌─────────────────────────────────────────────────────────────────────────┬────────────────────────┬──────────┬──────────────┬──────────────────────┐
│ IMAGE DIGEST                                                            │ TAGS                   │ STATUS   │ ARCHIVE SIZE │ ANALYZED AT          │
├─────────────────────────────────────────────────────────────────────────┼────────────────────────┼──────────┼──────────────┼──────────────────────┤
│ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ docker.io/nginx:latest │ archived │ 1.4 MB       │ 2022-08-23T21:08:29Z │
└─────────────────────────────────────────────────────────────────────────┴────────────────────────┴──────────┴──────────────┴──────────────────────┘

To add an image to the archive, use the digest. All analysis, policy evaluations, and tags will be added to the archive. NOTE: this does not remove it from the working set. To fully move it, you must first archive it and then delete the image in the working set using AnchoreCTL or the API directly.

Archiving Images

Archiving an image analysis creates a snapshot of the image’s analysis data, policy evaluation history, and tags, and stores it in a different storage location and a different record location than working set images.


# anchorectl image list
 ✔ Fetched images
┌───────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────┬────────┐
│ TAG                                                   │ DIGEST                                                                  │ ANALYSIS │ STATUS │
├───────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────┼────────┤
│ docker.io/ubuntu:latest                               │ sha256:33bca6883412038cc4cbd3ca11406076cf809c1dd1462a144ed2e38a7e79378a │ analyzed │ active │
│ docker.io/ubuntu:latest                               │ sha256:42ba2dfce475de1113d55602d40af18415897167d47c2045ec7b6d9746ff148f │ analyzed │ active │
│ docker.io/localimage:latest                           │ sha256:74c6eb3bbeb683eec0b8859bd844620d0b429a58d700ea14122c1892ae1f2885 │ analyzed │ active │
│ docker.io/nginx:latest                                │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ analyzed │ active │
└───────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────┴────────┘

# anchorectl archive image add sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
 ✔ Added image to archive
┌─────────────────────────────────────────────────────────────────────────┬──────────┬────────────────────────┐
│ DIGEST                                                                  │ STATUS   │ DETAIL                 │
├─────────────────────────────────────────────────────────────────────────┼──────────┼────────────────────────┤
│ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ archived │ Completed successfully │
└─────────────────────────────────────────────────────────────────────────┴──────────┴────────────────────────┘

Then to delete it in the working set (optionally):

NOTE: You may need to use --force if the image is the newest of its tags and has active subscriptions


# anchorectl image delete sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc  --force
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST                                                                  │ STATUS   │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘

At this point the image is in the archive only.

Restoring images from the archive into the working set

This will not delete the archive entry, only add it back to the working set. Restore an image to the working set from the archive:


# anchorectl archive image restore sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
 ✔ Restore image
┌────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────┬────────┐
│ TAG                    │ DIGEST                                                                  │ ANALYSIS │ STATUS │
├────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────┼────────┤
│ docker.io/nginx:latest │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ analyzed │ active │
└────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────┴────────┘

To view the restored image:


# anchorectl image get sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
Tag: docker.io/nginx:latest
Digest: sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
ID: 2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
Analysis: analyzed
Status: active

Working with Archive rules

As with all AnchoreCTL commands, the --help option will show the arguments, options and descriptions of valid values.

List existing rules:


# anchorectl archive rule list
 ✔ Fetched rules
┌──────────────────────────────────┬────────────┬──────────────┬────────────────────┬────────────┬─────────┬───────┬──────────────────┬──────────────┬─────────────┬──────────────────┬────────┬──────────────────────┐
│ ID                               │ TRANSITION │ ANALYSIS AGE │ TAG VERSIONS NEWER │ REGISTRY   │ REPO    │ TAG   │ REGISTRY EXCLUDE │ REPO EXCLUDE │ TAG EXCLUDE │ EXCLUDE EXP DAYS │ GLOBAL │ LAST UPDATED         │
├──────────────────────────────────┼────────────┼──────────────┼────────────────────┼────────────┼─────────┼───────┼──────────────────┼──────────────┼─────────────┼──────────────────┼────────┼──────────────────────┤
│ 2ca9284202814f6aa41916fd8d21ddf2 │ archive    │ 90d          │ 90                 │ *          │ *       │ *     │                  │              │             │ -1               │ false  │ 2022-08-19T17:58:38Z │
│ 6cb4011b102a4ba1a86a5f3695871004 │ archive    │ 90d          │ 90                 │ foobar.com │ myimage │ mytag │ barfoo.com       │ *            │ *           │ -1               │ false  │ 2022-08-22T18:47:32Z │
└──────────────────────────────────┴────────────┴──────────────┴────────────────────┴────────────┴─────────┴───────┴──────────────────┴──────────────┴─────────────┴──────────────────┴────────┴──────────────────────┘

Add a rule:


anchorectl archive rule add --transition archive --analysis-age-days 90 --tag-versions-newer 1 --selector-registry 'docker.io' --selector-repository 'library/*' --selector-tag 'latest'
 ✔ Added rule
ID: 0031546b9ce94cf0ae0e60c0f35b9ea3
Transition: archive
Analysis Age: 90d
Tag Versions Newer: 1
Selector:
  Registry: docker.io
  Repo: library/*
  Tag: latest
Exclude:
  Selector:
    Registry Exclude:
    Repo Exclude:
    Tag Exclude:
  Exclude Exp Days: -1
Global: false
Last Updated: 2022-08-24T22:57:51Z

The required parameters are: minimum age of analysis in days, number of tag versions newer, and the transition to use.

There is also an optional --system-global flag available for admin account users that makes the rule apply to all accounts in the system.
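
For example, a sketch combining the flags shown above (run as an admin user):

# anchorectl archive rule add --transition archive --analysis-age-days 90 --tag-versions-newer 1 --system-global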

As a non-admin user you can see global rules, but you cannot update or delete them (you will get a 404):


# ANCHORECTL_USERNAME=test1user ANCHORECTL_PASSWORD=password ANCHORECTL_ACCOUNT=test1acct anchorectl archive rule list
 ✔ Fetched rules
┌──────────────────────────────────┬────────────┬──────────────┬────────────────────┬───────────┬───────────┬────────┬──────────────────┬──────────────┬─────────────┬──────────────────┬────────┬──────────────────────┐
│ ID                               │ TRANSITION │ ANALYSIS AGE │ TAG VERSIONS NEWER │ REGISTRY  │ REPO      │ TAG    │ REGISTRY EXCLUDE │ REPO EXCLUDE │ TAG EXCLUDE │ EXCLUDE EXP DAYS │ GLOBAL │ LAST UPDATED         │
├──────────────────────────────────┼────────────┼──────────────┼────────────────────┼───────────┼───────────┼────────┼──────────────────┼──────────────┼─────────────┼──────────────────┼────────┼──────────────────────┤
│ 16dc38cef54e4ce5ac87d00e90b4a4f2 │ archive    │ 90d          │ 1                  │ docker.io │ library/* │ latest │                  │              │             │ -1               │ true   │ 2022-08-24T23:01:05Z │
└──────────────────────────────────┴────────────┴──────────────┴────────────────────┴───────────┴───────────┴────────┴──────────────────┴──────────────┴─────────────┴──────────────────┴────────┴──────────────────────┘

# ANCHORECTL_USERNAME=test1user ANCHORECTL_PASSWORD=password ANCHORECTL_ACCOUNT=test1acct anchorectl archive rule delete 16dc38cef54e4ce5ac87d00e90b4a4f2
 ⠙ Deleting rule
error: 1 error occurred:
	* unable to delete rule:
{
  "detail": {
    "error_codes": []
  },
  "httpcode": 404,
  "message": "Rule not found"
}

# ANCHORECTL_USERNAME=test1user ANCHORECTL_PASSWORD=password ANCHORECTL_ACCOUNT=test1acct anchorectl archive rule get 16dc38cef54e4ce5ac87d00e90b4a4f2
 ✔ Fetched rule
ID: 16dc38cef54e4ce5ac87d00e90b4a4f2
Transition: archive
Analysis Age: 90d
Tag Versions Newer: 1
Selector:
  Registry: docker.io
  Repo: library/*
  Tag: latest
Exclude:
  Selector:
    Registry Exclude:
    Repo Exclude:
    Tag Exclude:
  Exclude Exp Days: -1
Global: true
Last Updated: 2022-08-24T23:01:05Z

Delete a rule:

# anchorectl archive rule delete 16dc38cef54e4ce5ac87d00e90b4a4f2
 ✔ Deleted rule
No results

4.5 - Artifact Lifecycle Policies

Artifact Lifecycle Policies are instruction sets which perform lifecycle events on certain types of artifacts.

Each policy can perform an action on a given artifact_type based on configured policy_conditions (rules/selectors).

As an example, a system administrator may create an Artifact Lifecycle Policy that will automatically delete any image that has an analysis date older than 180 days.

WARNING ⚠️

  • ⚠️ These policies have the ability to delete data without archive/backup. Proceed with caution!
  • ⚠️ These policies are GLOBAL: they will impact every account on the system.
  • ⚠️ These policies can only be created and managed by a system administrator.

Policy Components

Artifact Lifecycle Policies are global policies that will execute on a schedule defined by a cycle_timer within the catalog service. services.catalog.cycle_timers.artifact_lifecycle_policy_tasks has a default time of every 12 hours.
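
For reference, a minimal sketch of overriding this timer in config.yaml (the value is assumed to be in seconds, matching the other catalog cycle timers; 43200 seconds is 12 hours):

services:
  catalog:
    cycle_timers:
      artifact_lifecycle_policy_tasks: 43200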

The policy is constructed with the following parameters:

  • Artifacts Types - The type of artifacts the policy will consider. The current supported type is image.

  • Inclusion Rules - The set of criteria which will be used to determine the set of artifacts to work on.
    All criteria must be satisfied for the policy to enact on an artifact.

    • days_since_analyzed
      • Selects artifacts whose analyzed_at date is n days old.
      • If this value is set to less than zero, this rule is disabled.
      • An artifact that has not been analyzed, either because it failed analysis or the analysis is pending, will not be included.
    • even_if_exists_in_runtime_inventory
      • When true, an artifact will be included even if it exists in the Runtime Inventory.
      • When false, an artifact will not be included if it exists in the Runtime Inventory, essentially protecting artifacts found in your runtime inventory. Please review the Inventory Time-To-Live for information on how to prune the Runtime Inventory.
    • include_base_images
      • When true, images that have ancestral children will be included.
      • When false, images that have ancestral children will not be included.
      • Note: These are evaluated per run. As children are deleted, a previously excluded parent image may also become eligible for deletion.
  • Policy Actions - After the policy determines a set of artifacts that satisfy the Inclusion Rules, this is the action which will be performed on them. The current supported action is delete. Actioned artifacts will have a matching system Event created for audit and notification purposes.

Policy Interaction

If more than one policy is enabled, each policy will work independently, using its set of rules to determine if any artifacts satisfy its criteria. Each policy will apply its action on the set of artifacts.

Creating a new Artifact Lifecycle Policy

Due to the potentially destructive nature of these policies every parameter must be explicitly declared when creating a new policy. This means all policy rules must be explicitly configured or explicitly disabled.

# anchorectl system artifact-lifecycle-policy add --action=delete --artifact-type=image --name="example policy" --description=example --enabled=false --days-since-analyzed=30 --even-if-exists-in-runtime=true
 ✔ Added artifact-lifecycle-policy
Name: example lifecycle policy
Policy Conditions:
  - artifactType: image
    daysSinceAnalyzed: 30
    evenIfExistsInRuntimeInventory: true
    includeBaseImages: false
    version: 1
Uuid: 73226831-9140-4d27-a922-4a61e43dbb0d
Action: delete
Deleted At:
Enabled: false
Updated At: 2023-11-22T13:38:49Z
Created At: 2023-11-22T13:38:49Z
Description: example

Updating an existing Artifact Lifecycle Policy

# anchorectl system artifact-lifecycle-policy update 5620b641-a25f-4b1f-966c-929281a41e16 --action=delete --name=example --artifact-type=image --days-since-analyzed=60 --even-if-exists-in-runtime=false
 ✔ Update artifact-lifecycle-policy
Name: example
Policy Conditions:
  - artifactType: image
    daysSinceAnalyzed: 60
    evenIfExistsInRuntimeInventory: false
    includeBaseImages: false
    version: 2
Uuid: 5620b641-a25f-4b1f-966c-929281a41e16
Action: delete
Deleted At:
Enabled: false
Updated At: 2023-11-22T13:58:04Z
Created At: 2023-11-22T13:02:24Z
Description: test description

Enabling the Artifact Lifecycle Policy

# anchorectl system artifact-lifecycle-policy update 5620b641-a25f-4b1f-966c-929281a41e16 --action=delete --name=example --artifact-type=image --days-since-analyzed=60 --even-if-exists-in-runtime=false --enable=true
 ✔ Update artifact-lifecycle-policy
Name: example
Policy Conditions:
  - artifactType: image
    daysSinceAnalyzed: 60
    evenIfExistsInRuntimeInventory: false
    includeBaseImages: false
    version: 2
Uuid: 5620b641-a25f-4b1f-966c-929281a41e16
Action: delete
Deleted At:
Enabled: true
Updated At: 2023-11-22T13:58:04Z
Created At: 2023-11-22T13:02:24Z
Description: test description

List Artifact Lifecycle Policies

anchorectl system artifact-lifecycle-policy list
 ✔ Fetched artifact-lifecycle-policies
Items:
  - action: delete
    createdAt: "2023-11-22T13:02:24Z"
    description: example description
    enabled: true
    name: "example policy"
    policyConditions:
      - artifactType: image
        daysSinceAnalyzed: 1
        evenIfExistsInRuntimeInventory: true
        includeBaseImages: false
        version: 2
    updatedAt: "2023-11-22T13:02:24Z"
    uuid: 5620b641-a25f-4b1f-966c-929281a41e16

Get specific Artifact Lifecycle Policy

Note: it is possible to request “deleted” policies through this API for audit reasons. The deleted_at field will be null, and enabled will be true if the policy is active.

anchorectl system artifact-lifecycle-policy get 5620b641-a25f-4b1f-966c-929281a41e16
 ✔ Fetched artifact-lifecycle-policy
Name: 2023-11-22T13:02:24.621Z
Policy Conditions:
  - artifactType: image
    daysSinceAnalyzed: 1
    evenIfExistsInRuntimeInventory: true
    includeBaseImages: false
    version: 1
Uuid: 5620b641-a25f-4b1f-966c-929281a41e16
Action: delete
Deleted At:
Enabled: true
Updated At: 2023-11-22T13:02:24Z
Created At: 2023-11-22T13:02:24Z
Description: test description

Delete a policy

Note: for audit purposes, the policy will still remain in the system. It will be disabled and marked deleted. This effectively makes it hidden unless explicitly requested by its UUID through the API.

# anchorectl system artifact-lifecycle-policy delete 73226831-9140-4d27-a922-4a61e43dbb0d
 ✔ Deleted artifact-lifecycle-policy
No results

4.6 - Content Hints

For an overview of the content hints and overrides features, see the feature overview.

Enabling Content Hints

This feature is disabled by default to ensure that images may not exercise this feature without the admin’s explicit approval.

In each analyzer’s config.yaml file (by default at /config/config.yaml), set enable_hints: true in the analyzer service section.

If you are using the default config.yaml included in the image, you may instead set the ANCHORE_HINTS_ENABLED=true environment variable for the analyzer service (e.g., for use in our provided config for Docker Compose).
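
A minimal sketch of the relevant config.yaml fragment (key placement assumed from the default configuration file):

services:
  analyzer:
    enable_hints: true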

For Helm, see the Helm installation instructions for enabling the hints file mechanism.
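
As a sketch, assuming the chart exposes this setting under the anchoreConfig section described earlier (verify the exact key path against your chart version):

anchoreConfig:
  analyzer:
    enable_hints: true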

4.7 - Using Dashboard

Overview

The Dashboard is your configurable landing page where insights into the collective status of your container image environment can be displayed through various widgets. Utilizing the Enterprise Reporting Service, the widgets are hydrated with metrics which are generated and updated on a cycle, the duration of which is determined by application configuration.

Note: Because the reporting data cycle is configurable, the results shown in this view may not precisely reflect actual analysis output at any given time.

For more information on how to modify this cycle or the Reporting Service in general, please refer to the Reporting Service.

The following sections in this document describe how to add widgets to the dashboard and how to customize the dashboard layout to your preference.

Widgets

Adding a Widget

To add a new widget, click the Add New Widget button present in the Dashboard view. Or, if no widgets are defined, click the Let’s add one! button shown.

Upon doing so, a modal will appear with several properties described below:

  • Name: The name shown within the Widget’s header.
  • Mode: ‘Compact’ a widget to keep data easily digestible at a glance. ‘Expand’ to view how your data has evolved over a configurable time period.
  • Collection: The collection of tags you’re interested in. Toggle to view metrics for all tags - including historical ones.
  • Time Series Settings: The time period you wish to view metrics for within the expanded mode.
  • Type: The category of information, such as ‘Vulnerabilities’ or ‘Policy Evaluations’, which denotes what metrics are capable of being shown.
  • Metrics: The list of metrics available based on Type.

Once you enter the required properties and click OK, the widget will be created and any metrics needed to hydrate your Dashboard will be fetched and updated.

Note: All fields except Type are editable by clicking the button shown on the top right of the header when hovering over a widget.

Viewing Results

The Reporting Service at its core aggregates data on resources across accounts and generates metrics. These metrics, in turn, fuel the Dashboard’s mission to provide actionable items straight to the user - that’s you!

Leverage these results to dive into the exact reason for a failed policy evaluation or the cause and fix of a critical vulnerability.

Vulnerabilities

Vulnerabilities are grouped and colored by severity. Critical, High, and Medium vulnerabilities are included by default but you can toggle which ones are relevant to your interests by clicking the button.

Clicking one of these metrics navigates you to a view (shown below) where you can browse the filtered set of vulnerabilities matching that severity.

For more info on a particular vulnerability, click on its corresponding button visible in the Links column. To view the exact tags affected, drill down to a specific repository by expanding the arrows.

View that tag’s in-depth analysis by clicking on the value within its Image Digest column.

Policy Evaluations

Policy Evaluations are grouped by their evaluation outcome such as Pass or Fail and can be further filtered by the reason for that result. All reasons are included by default but as with other widget properties, they can be edited by clicking the button.

Clicking one of these results navigates you to a view (shown below) where you can browse the affected and filtered set of tags.

Dig down to a specific tag by expanding the arrow shown on the left side of the row.

Navigate using the Image Digest value to view even more info such as the specific policy being triggered and what exactly is triggering it. If you’re interested in viewing the contents of your policy, click on the Policy ID value.

Dashboard Configuration

After populating your Dashboard with various widgets, you can easily modify the layout by using some methods explained below:

Click the icon shown in the top right of the header of a widget to Drag and Drop it into a new location.

Click the icon shown in the top right of a widget to Expand it and include a graphical representation of how your data has evolved over a time period of your choice.

Click the icon shown in the top right of a widget to Compact it into an easily digestible view of the metrics you’re interested in.

Click the icon shown in the top right of a widget to Delete it from the dashboard.

4.8 - Data Synchronization

Introduction

In this section, you’ll learn how Anchore Enterprise ingests the data used for analysis and vulnerability management.

Enterprise manages four datasets:

  • Vulnerability Database (grypedb)
  • ClamAV Malware Database
  • CISA KEV (Known Exploited Vulnerabilities)
  • EPSS (Exploit Prediction Scoring System)

This section also covers the requirements for running the Data Syncer Service. You can read more about how feeds work in the feature overview.

Requirements

Network Ingress

The following two FQDNs need to be allowlisted in your network to allow the Data Syncer Service to communicate with the Anchore Data Service:

https://data.anchore-enterprise.com
https://s3.us-west-2.amazonaws.com/enterprise-data-service.production.anchore.io

Ideally, the endpoints can be allowlisted via a layer 7 proxy. If you require IP ACLs for allowlisting, the endpoints are within the AWS us-west-2 S3 and global CloudFront IP space (see https://docs.aws.amazon.com/vpc/latest/userguide/aws-ip-ranges.html).

The following can be used to gather the IP ranges:

curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | \
jq -r '.prefixes[] | select(.region=="us-west-2" and .service=="S3") | .ip_prefix'

curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.service=="CLOUDFRONT") | .ip_prefix' | sort

4.8.1 - Data Syncer Configuration

Dataset Synchronization Interval

The Data Syncer Service will check every hour if there is new data available from the Anchore Data Service. If it finds a new dataset then it will sync it down immediately. It will also trigger the Policy Engine Service to reprocess the data to make it available for policy evaluations. The analyzer checks the data syncer for a new ClamAV Malware signature database before every malware scan (if enabled).

Controlling Which Feeds and Groups are Synced

During the initial data sync, you can query the progress and status of the feed sync using anchorectl. For example:

# anchorectl feed list
 ✔ List feed                                     
┌────────────────────────────────────────────┬────────────────────┬─────────┬──────────────────────┬──────────────┐
│ FEED                                       │ GROUP              │ ENABLED │ LAST UPDATED         │ RECORD COUNT │
├────────────────────────────────────────────┼────────────────────┼─────────┼──────────────────────┼──────────────┤
│ ClamAV Malware Database                    │ clamav_db          │ true    │ 2024-09-26T13:13:50Z │ 1            │
│ Vulnerabilities                            │ github:composer    │ true    │ 2024-09-26T12:14:50Z │ 4036         │
│ Vulnerabilities                            │ github:dart        │ true    │ 2024-09-26T12:14:50Z │ 8            │
│ Vulnerabilities                            │ github:gem         │ true    │ 2024-09-26T12:14:50Z │ 817          │
│ Vulnerabilities                            │ github:go          │ true    │ 2024-09-26T12:14:50Z │ 1875         │
│ Vulnerabilities                            │ github:java        │ true    │ 2024-09-26T12:14:50Z │ 5058         │
│ Vulnerabilities                            │ github:npm         │ true    │ 2024-09-26T12:14:50Z │ 15586        │
│ Vulnerabilities                            │ github:nuget       │ true    │ 2024-09-26T12:14:50Z │ 624          │
│ Vulnerabilities                            │ github:python      │ true    │ 2024-09-26T12:14:50Z │ 3226         │
.
.
.
│ CISA KEV (Known Exploited Vulnerabilities) │ kev_db             │ true    │ 2024-09-26T13:13:47Z │ 1181         │
│ Exploit Prediction Scoring System Database │ epss_db            │ true    │ 2024-11-18T18:04:12Z │ 266565       │
└────────────────────────────────────────────┴────────────────────┴─────────┴──────────────────────┴──────────────┘

Using the Config File to Include/Exclude Feeds and Package Types when scanning for vulnerabilities

With the feed service removed, Enterprise no longer supports excluding specific providers and package types from the vulnerability feed sync. To provide the same experience when using the product, you can instead exclude specific providers and package types from vulnerability matching.

Using Helm

In your values.yaml file set the following:

policy_engine:
  vulnerabilities:
    matching:
      exclude:
        providers: ["rhel","debian"]
        package_types: ["rpm"]

Using Docker Compose

In your config.yaml file set the following:

services:
  policy_engine:
    vulnerabilities:
      matching:
        exclude:
          providers: ["rhel","debian"]
          package_types: ["rpm"]

Further information can be found in Vulnerability Management.

4.8.2 - Data Synchronization

When Anchore Enterprise runs, the Data Syncer Service will begin to synchronize security feed data from the Anchore Data Service.

CVE data for Linux distributions such as Alpine, CentOS, Debian, Oracle, Red Hat, and Ubuntu will be downloaded. The initial sync typically takes anywhere from 1-5 minutes depending on your environment and network speed. After that, the Data Syncer Service will check every hour if there is new data available from the Anchore Data Service. If it finds a new dataset, it will sync it down immediately.

For air-gapped environments, please see the Air-Gapped documentation.

Checking Feed Status

Feed information can be retrieved through the API and AnchoreCTL.

# anchorectl feed list
 ✔ List feed                                                                                                                                                                                       
┌────────────────────────────────────────────┬────────────────────┬─────────┬──────────────────────┬──────────────┐
│ FEED                                       │ GROUP              │ ENABLED │ LAST UPDATED         │ RECORD COUNT │
├────────────────────────────────────────────┼────────────────────┼─────────┼──────────────────────┼──────────────┤
│ ClamAV Malware Database                    │ clamav_db          │ true    │ 2024-09-26T13:13:50Z │ 1            │
│ Vulnerabilities                            │ github:composer    │ true    │ 2024-09-26T12:14:50Z │ 4036         │
│ Vulnerabilities                            │ github:dart        │ true    │ 2024-09-26T12:14:50Z │ 8            │
│ Vulnerabilities                            │ github:gem         │ true    │ 2024-09-26T12:14:50Z │ 817          │
│ Vulnerabilities                            │ github:go          │ true    │ 2024-09-26T12:14:50Z │ 1875         │
│ Vulnerabilities                            │ github:java        │ true    │ 2024-09-26T12:14:50Z │ 5058         │
│ Vulnerabilities                            │ github:npm         │ true    │ 2024-09-26T12:14:50Z │ 15586        │
│ Vulnerabilities                            │ github:nuget       │ true    │ 2024-09-26T12:14:50Z │ 624          │
│ Vulnerabilities                            │ github:python      │ true    │ 2024-09-26T12:14:50Z │ 3226         │
│ Vulnerabilities                            │ github:rust        │ true    │ 2024-09-26T12:14:50Z │ 804          │
│ Vulnerabilities                            │ github:swift       │ true    │ 2024-09-26T12:14:50Z │ 32           │
│ Vulnerabilities                            │ msrc:10378         │ true    │ 2024-09-26T12:14:49Z │ 2668         │
│ Vulnerabilities                            │ msrc:10379         │ true    │ 2024-09-26T12:14:49Z │ 2645         │
│ Vulnerabilities                            │ msrc:10481         │ true    │ 2024-09-26T12:14:49Z │ 1951         │
│ Vulnerabilities                            │ msrc:10482         │ true    │ 2024-09-26T12:14:49Z │ 2028         │
│ Vulnerabilities                            │ msrc:10483         │ true    │ 2024-09-26T12:14:49Z │ 2822         │
│ Vulnerabilities                            │ msrc:10484         │ true    │ 2024-09-26T12:14:49Z │ 1934         │
│ Vulnerabilities                            │ msrc:10543         │ true    │ 2024-09-26T12:14:49Z │ 2796         │
│ Vulnerabilities                            │ msrc:10729         │ true    │ 2024-09-26T12:14:49Z │ 2908         │
│ Vulnerabilities                            │ msrc:10735         │ true    │ 2024-09-26T12:14:49Z │ 3006         │
│ Vulnerabilities                            │ msrc:10788         │ true    │ 2024-09-26T12:14:49Z │ 466          │
│ Vulnerabilities                            │ msrc:10789         │ true    │ 2024-09-26T12:14:49Z │ 437          │
│ Vulnerabilities                            │ msrc:10816         │ true    │ 2024-09-26T12:14:49Z │ 3328         │
│ Vulnerabilities                            │ msrc:10852         │ true    │ 2024-09-26T12:14:49Z │ 3043         │
│ Vulnerabilities                            │ msrc:10853         │ true    │ 2024-09-26T12:14:49Z │ 3167         │
│ Vulnerabilities                            │ msrc:10855         │ true    │ 2024-09-26T12:14:49Z │ 3300         │
│ Vulnerabilities                            │ msrc:10951         │ true    │ 2024-09-26T12:14:49Z │ 716          │
│ Vulnerabilities                            │ msrc:10952         │ true    │ 2024-09-26T12:14:49Z │ 766          │
│ Vulnerabilities                            │ msrc:11453         │ true    │ 2024-09-26T12:14:49Z │ 1240         │
│ Vulnerabilities                            │ msrc:11454         │ true    │ 2024-09-26T12:14:49Z │ 1290         │
│ Vulnerabilities                            │ msrc:11466         │ true    │ 2024-09-26T12:14:49Z │ 395          │
│ Vulnerabilities                            │ msrc:11497         │ true    │ 2024-09-26T12:14:49Z │ 1454         │
│ Vulnerabilities                            │ msrc:11498         │ true    │ 2024-09-26T12:14:49Z │ 1514         │
│ Vulnerabilities                            │ msrc:11499         │ true    │ 2024-09-26T12:14:49Z │ 981          │
│ Vulnerabilities                            │ msrc:11563         │ true    │ 2024-09-26T12:14:49Z │ 1344         │
│ Vulnerabilities                            │ msrc:11568         │ true    │ 2024-09-26T12:14:49Z │ 2993         │
│ Vulnerabilities                            │ msrc:11569         │ true    │ 2024-09-26T12:14:49Z │ 3095         │
│ Vulnerabilities                            │ msrc:11570         │ true    │ 2024-09-26T12:14:49Z │ 2975         │
│ Vulnerabilities                            │ msrc:11571         │ true    │ 2024-09-26T12:14:49Z │ 3266         │
│ Vulnerabilities                            │ msrc:11572         │ true    │ 2024-09-26T12:14:49Z │ 3238         │
│ Vulnerabilities                            │ msrc:11583         │ true    │ 2024-09-26T12:14:49Z │ 1038         │
│ Vulnerabilities                            │ msrc:11644         │ true    │ 2024-09-26T12:14:49Z │ 1054         │
│ Vulnerabilities                            │ msrc:11645         │ true    │ 2024-09-26T12:14:49Z │ 1089         │
│ Vulnerabilities                            │ msrc:11646         │ true    │ 2024-09-26T12:14:49Z │ 1055         │
│ Vulnerabilities                            │ msrc:11647         │ true    │ 2024-09-26T12:14:49Z │ 1074         │
│ Vulnerabilities                            │ msrc:11712         │ true    │ 2024-09-26T12:14:49Z │ 1442         │
│ Vulnerabilities                            │ msrc:11713         │ true    │ 2024-09-26T12:14:49Z │ 1491         │
│ Vulnerabilities                            │ msrc:11714         │ true    │ 2024-09-26T12:14:49Z │ 1447         │
│ Vulnerabilities                            │ msrc:11715         │ true    │ 2024-09-26T12:14:49Z │ 999          │
│ Vulnerabilities                            │ msrc:11766         │ true    │ 2024-09-26T12:14:49Z │ 912          │
│ Vulnerabilities                            │ msrc:11767         │ true    │ 2024-09-26T12:14:49Z │ 915          │
│ Vulnerabilities                            │ msrc:11768         │ true    │ 2024-09-26T12:14:49Z │ 940          │
│ Vulnerabilities                            │ msrc:11769         │ true    │ 2024-09-26T12:14:49Z │ 934          │
│ Vulnerabilities                            │ msrc:11800         │ true    │ 2024-09-26T12:14:49Z │ 382          │
│ Vulnerabilities                            │ msrc:11801         │ true    │ 2024-09-26T12:14:49Z │ 1277         │
│ Vulnerabilities                            │ msrc:11802         │ true    │ 2024-09-26T12:14:49Z │ 1277         │
│ Vulnerabilities                            │ msrc:11803         │ true    │ 2024-09-26T12:14:49Z │ 981          │
│ Vulnerabilities                            │ msrc:11896         │ true    │ 2024-09-26T12:14:49Z │ 792          │
│ Vulnerabilities                            │ msrc:11897         │ true    │ 2024-09-26T12:14:49Z │ 762          │
│ Vulnerabilities                            │ msrc:11898         │ true    │ 2024-09-26T12:14:49Z │ 763          │
│ Vulnerabilities                            │ msrc:11923         │ true    │ 2024-09-26T12:14:49Z │ 1733         │
│ Vulnerabilities                            │ msrc:11924         │ true    │ 2024-09-26T12:14:49Z │ 1726         │
│ Vulnerabilities                            │ msrc:11926         │ true    │ 2024-09-26T12:14:49Z │ 1536         │
│ Vulnerabilities                            │ msrc:11927         │ true    │ 2024-09-26T12:14:49Z │ 1503         │
│ Vulnerabilities                            │ msrc:11929         │ true    │ 2024-09-26T12:14:49Z │ 1433         │
│ Vulnerabilities                            │ msrc:11930         │ true    │ 2024-09-26T12:14:49Z │ 1429         │
│ Vulnerabilities                            │ msrc:11931         │ true    │ 2024-09-26T12:14:49Z │ 1474         │
│ Vulnerabilities                            │ msrc:12085         │ true    │ 2024-09-26T12:14:49Z │ 1044         │
│ Vulnerabilities                            │ msrc:12086         │ true    │ 2024-09-26T12:14:49Z │ 1053         │
│ Vulnerabilities                            │ msrc:12097         │ true    │ 2024-09-26T12:14:49Z │ 964          │
│ Vulnerabilities                            │ msrc:12098         │ true    │ 2024-09-26T12:14:49Z │ 939          │
│ Vulnerabilities                            │ msrc:12099         │ true    │ 2024-09-26T12:14:49Z │ 943          │
│ Vulnerabilities                            │ nvd                │ true    │ 2024-09-26T12:14:58Z │ 263831       │
│ Vulnerabilities                            │ alpine:3.10        │ true    │ 2024-09-26T12:13:37Z │ 2321         │
│ Vulnerabilities                            │ alpine:3.11        │ true    │ 2024-09-26T12:13:37Z │ 2659         │
│ Vulnerabilities                            │ alpine:3.12        │ true    │ 2024-09-26T12:13:37Z │ 3193         │
│ Vulnerabilities                            │ alpine:3.13        │ true    │ 2024-09-26T12:13:37Z │ 3684         │
│ Vulnerabilities                            │ alpine:3.14        │ true    │ 2024-09-26T12:13:37Z │ 4265         │
│ Vulnerabilities                            │ alpine:3.15        │ true    │ 2024-09-26T12:13:37Z │ 4815         │
│ Vulnerabilities                            │ alpine:3.16        │ true    │ 2024-09-26T12:13:37Z │ 5271         │
│ Vulnerabilities                            │ alpine:3.17        │ true    │ 2024-09-26T12:13:37Z │ 5630         │
│ Vulnerabilities                            │ alpine:3.18        │ true    │ 2024-09-26T12:13:37Z │ 6144         │
│ Vulnerabilities                            │ alpine:3.19        │ true    │ 2024-09-26T12:13:37Z │ 6338         │
│ Vulnerabilities                            │ alpine:3.2         │ true    │ 2024-09-26T12:13:37Z │ 305          │
│ Vulnerabilities                            │ alpine:3.20        │ true    │ 2024-09-26T12:13:37Z │ 6428         │
│ Vulnerabilities                            │ alpine:3.3         │ true    │ 2024-09-26T12:13:37Z │ 470          │
│ Vulnerabilities                            │ alpine:3.4         │ true    │ 2024-09-26T12:13:37Z │ 679          │
│ Vulnerabilities                            │ alpine:3.5         │ true    │ 2024-09-26T12:13:37Z │ 902          │
│ Vulnerabilities                            │ alpine:3.6         │ true    │ 2024-09-26T12:13:37Z │ 1075         │
│ Vulnerabilities                            │ alpine:3.7         │ true    │ 2024-09-26T12:13:37Z │ 1461         │
│ Vulnerabilities                            │ alpine:3.8         │ true    │ 2024-09-26T12:13:37Z │ 1671         │
│ Vulnerabilities                            │ alpine:3.9         │ true    │ 2024-09-26T12:13:37Z │ 1955         │
│ Vulnerabilities                            │ alpine:edge        │ true    │ 2024-09-26T12:13:37Z │ 6466         │
│ Vulnerabilities                            │ amzn:2             │ true    │ 2024-09-26T12:13:34Z │ 2280         │
│ Vulnerabilities                            │ amzn:2022          │ true    │ 2024-09-26T12:13:34Z │ 276          │
│ Vulnerabilities                            │ amzn:2023          │ true    │ 2024-09-26T12:13:34Z │ 736          │
│ Vulnerabilities                            │ chainguard:rolling │ true    │ 2024-09-26T12:13:19Z │ 4462         │
│ Vulnerabilities                            │ debian:10          │ true    │ 2024-09-26T12:14:52Z │ 32021        │
│ Vulnerabilities                            │ debian:11          │ true    │ 2024-09-26T12:14:52Z │ 33497        │
│ Vulnerabilities                            │ debian:12          │ true    │ 2024-09-26T12:14:52Z │ 32452        │
│ Vulnerabilities                            │ debian:13          │ true    │ 2024-09-26T12:14:52Z │ 31631        │
│ Vulnerabilities                            │ debian:7           │ true    │ 2024-09-26T12:14:52Z │ 20455        │
│ Vulnerabilities                            │ debian:8           │ true    │ 2024-09-26T12:14:52Z │ 24058        │
│ Vulnerabilities                            │ debian:9           │ true    │ 2024-09-26T12:14:52Z │ 28240        │
│ Vulnerabilities                            │ debian:unstable    │ true    │ 2024-09-26T12:14:52Z │ 35913        │
│ Vulnerabilities                            │ mariner:1.0        │ true    │ 2024-09-26T12:14:41Z │ 2092         │
│ Vulnerabilities                            │ mariner:2.0        │ true    │ 2024-09-26T12:14:41Z │ 2624         │
│ Vulnerabilities                            │ ol:5               │ true    │ 2024-09-26T12:14:44Z │ 1255         │
│ Vulnerabilities                            │ ol:6               │ true    │ 2024-09-26T12:14:44Z │ 1709         │
│ Vulnerabilities                            │ ol:7               │ true    │ 2024-09-26T12:14:44Z │ 2196         │
│ Vulnerabilities                            │ ol:8               │ true    │ 2024-09-26T12:14:44Z │ 1906         │
│ Vulnerabilities                            │ ol:9               │ true    │ 2024-09-26T12:14:44Z │ 870          │
│ Vulnerabilities                            │ rhel:5             │ true    │ 2024-09-26T12:14:59Z │ 7193         │
│ Vulnerabilities                            │ rhel:6             │ true    │ 2024-09-26T12:14:59Z │ 11121        │
│ Vulnerabilities                            │ rhel:7             │ true    │ 2024-09-26T12:14:59Z │ 11359        │
│ Vulnerabilities                            │ rhel:8             │ true    │ 2024-09-26T12:14:59Z │ 6998         │
│ Vulnerabilities                            │ rhel:9             │ true    │ 2024-09-26T12:14:59Z │ 4039         │
│ Vulnerabilities                            │ sles:11            │ true    │ 2024-09-26T12:14:47Z │ 594          │
│ Vulnerabilities                            │ sles:11.1          │ true    │ 2024-09-26T12:14:47Z │ 6125         │
│ Vulnerabilities                            │ sles:11.2          │ true    │ 2024-09-26T12:14:47Z │ 3291         │
│ Vulnerabilities                            │ sles:11.3          │ true    │ 2024-09-26T12:14:47Z │ 7081         │
│ Vulnerabilities                            │ sles:11.4          │ true    │ 2024-09-26T12:14:47Z │ 6583         │
│ Vulnerabilities                            │ sles:12            │ true    │ 2024-09-26T12:14:47Z │ 6018         │
│ Vulnerabilities                            │ sles:12.1          │ true    │ 2024-09-26T12:14:47Z │ 6205         │
│ Vulnerabilities                            │ sles:12.2          │ true    │ 2024-09-26T12:14:47Z │ 8339         │
│ Vulnerabilities                            │ sles:12.3          │ true    │ 2024-09-26T12:14:47Z │ 10396        │
│ Vulnerabilities                            │ sles:12.4          │ true    │ 2024-09-26T12:14:47Z │ 10215        │
│ Vulnerabilities                            │ sles:12.5          │ true    │ 2024-09-26T12:14:47Z │ 12444        │
│ Vulnerabilities                            │ sles:15            │ true    │ 2024-09-26T12:14:47Z │ 8737         │
│ Vulnerabilities                            │ sles:15.1          │ true    │ 2024-09-26T12:14:47Z │ 9245         │
│ Vulnerabilities                            │ sles:15.2          │ true    │ 2024-09-26T12:14:47Z │ 9572         │
│ Vulnerabilities                            │ sles:15.3          │ true    │ 2024-09-26T12:14:47Z │ 10074        │
│ Vulnerabilities                            │ sles:15.4          │ true    │ 2024-09-26T12:14:47Z │ 10436        │
│ Vulnerabilities                            │ sles:15.5          │ true    │ 2024-09-26T12:14:47Z │ 10880        │
│ Vulnerabilities                            │ sles:15.6          │ true    │ 2024-09-26T12:14:47Z │ 3775         │
│ Vulnerabilities                            │ ubuntu:12.04       │ true    │ 2024-09-26T12:15:12Z │ 14934        │
│ Vulnerabilities                            │ ubuntu:12.10       │ true    │ 2024-09-26T12:15:12Z │ 5641         │
│ Vulnerabilities                            │ ubuntu:13.04       │ true    │ 2024-09-26T12:15:12Z │ 4117         │
│ Vulnerabilities                            │ ubuntu:14.04       │ true    │ 2024-09-26T12:15:12Z │ 37910        │
│ Vulnerabilities                            │ ubuntu:14.10       │ true    │ 2024-09-26T12:15:12Z │ 4437         │
│ Vulnerabilities                            │ ubuntu:15.04       │ true    │ 2024-09-26T12:15:12Z │ 6220         │
│ Vulnerabilities                            │ ubuntu:15.10       │ true    │ 2024-09-26T12:15:12Z │ 6489         │
│ Vulnerabilities                            │ ubuntu:16.04       │ true    │ 2024-09-26T12:15:12Z │ 35057        │
│ Vulnerabilities                            │ ubuntu:16.10       │ true    │ 2024-09-26T12:15:12Z │ 8607         │
│ Vulnerabilities                            │ ubuntu:17.04       │ true    │ 2024-09-26T12:15:12Z │ 9094         │
│ Vulnerabilities                            │ ubuntu:17.10       │ true    │ 2024-09-26T12:15:12Z │ 7900         │
│ Vulnerabilities                            │ ubuntu:18.04       │ true    │ 2024-09-26T12:15:12Z │ 29533        │
│ Vulnerabilities                            │ ubuntu:18.10       │ true    │ 2024-09-26T12:15:12Z │ 8367         │
│ Vulnerabilities                            │ ubuntu:19.04       │ true    │ 2024-09-26T12:15:12Z │ 8634         │
│ Vulnerabilities                            │ ubuntu:19.10       │ true    │ 2024-09-26T12:15:12Z │ 8414         │
│ Vulnerabilities                            │ ubuntu:20.04       │ true    │ 2024-09-26T12:15:12Z │ 25271        │
│ Vulnerabilities                            │ ubuntu:20.10       │ true    │ 2024-09-26T12:15:12Z │ 9974         │
│ Vulnerabilities                            │ ubuntu:21.04       │ true    │ 2024-09-26T12:15:12Z │ 11304        │
│ Vulnerabilities                            │ ubuntu:21.10       │ true    │ 2024-09-26T12:15:12Z │ 12628        │
│ Vulnerabilities                            │ ubuntu:22.04       │ true    │ 2024-09-26T12:15:12Z │ 23527        │
│ Vulnerabilities                            │ ubuntu:22.10       │ true    │ 2024-09-26T12:15:12Z │ 14483        │
│ Vulnerabilities                            │ ubuntu:23.04       │ true    │ 2024-09-26T12:15:12Z │ 15562        │
│ Vulnerabilities                            │ ubuntu:23.10       │ true    │ 2024-09-26T12:15:12Z │ 18431        │
│ Vulnerabilities                            │ ubuntu:24.04       │ true    │ 2024-09-26T12:15:12Z │ 19537        │
│ Vulnerabilities                            │ wolfi:rolling      │ true    │ 2024-09-26T12:14:43Z │ 2867         │
│ Vulnerabilities                            │ anchore:exclusions │ true    │ 2024-09-26T12:14:43Z │ 12851        │
│ CISA KEV (Known Exploited Vulnerabilities) │ kev_db             │ true    │ 2024-09-26T13:13:47Z │ 1181         │
│ Exploit Prediction Scoring System Database │ epss_db            │ true    │ 2024-11-18T18:04:12Z │ 266565       │
└────────────────────────────────────────────┴────────────────────┴─────────┴──────────────────────┴──────────────┘

This command lists the feeds synchronized by Anchore Enterprise, the last sync time, and the current record count.

Note: Time is reported as UTC, not local time.
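
As noted above, the same information is available through the API. A minimal sketch using curl, assuming a standard deployment with the API listening on localhost:8228 and default admin credentials (adjust the URL, credentials, and path for your environment):

curl -u admin:foobar http://localhost:8228/v2/system/feeds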

Manually initiating feed sync

You can initiate a manual sync, which tells the Data Syncer Service to download the latest feed data from the Anchore Data Service.

# anchorectl feed sync
 ✔ Synced feeds

This will also inform the policy-engine to sync down the new dataset if the Data Syncer Service has successfully downloaded the latest data.

Forcing a full resync

If there is a scenario where you want the Data Syncer Service to force download the latest datasets and overwrite the existing data, you can use the --force_sync flag.

# anchorectl feed sync --force_sync
 ✔ Synced feeds            

4.8.3 - Air-Gapped

As of v5.10, AnchoreCTL is capable of importing and exporting feeds. AnchoreCTL downloads the datasets from the Anchore Data Service and then imports them into the Anchore Enterprise deployment. For more detail regarding the Anchore Data Service, please see Anchore Data Service.

Air Gap Flow

Configuration

To configure your Anchore Enterprise deployment to work in an air-gapped environment, you will need to disable the Data Syncer Service’s automatic feed sync.

For Helm

Set the following in your values.yaml:

dataSyncer:
  extraEnv:
    - name: ANCHORE_DATA_SYNC_AUTO_SYNC_ENABLED
      value: "false"

For Docker Compose

Set the following in your Docker Compose YAML file:

services:
  data-syncer:
    environment:
      - ANCHORE_DATA_SYNC_AUTO_SYNC_ENABLED=false

Auto Sync Disabled Log

To confirm auto sync is disabled, the following log line will be emitted by the data-syncer service upon startup:

[INFO] [anchore_enterprise.services.data_syncer.service/handle_data_sync():33] | Auto sync is disabled. Skipping data sync.
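
In a Docker Compose deployment, one way to confirm the log message is present (assuming the service is named data-syncer, as in the snippet above):

docker compose logs data-syncer | grep "Auto sync is disabled"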

Downloading and Importing Datasets

Once installed, AnchoreCTL can be used to download the latest feed data from the Anchore Data Service. This data can then be moved across the air gap and uploaded into your Anchore Enterprise deployment.

Downloading the Datasets

Run the following command outside your air-gapped environment to download the datasets.

Using your license key:

anchorectl airgap feed download -f <filename> -k <your api key>

Using your license file:

anchorectl airgap feed download -f <filename> -l <path to your license file>

  • To get your API key, check your license file for a field called apiKey.
  • This command will download all the feeds from the hosted service to the file specified by -f.
  • This command can take a bit of time to return depending on your connection speed.
  • The resulting file will be approximately 0.5 GB in size as of this writing but will continue to grow as more data is added to the feeds.

Importing the Datasets

Take a copy of this file and move it into your air-gapped environment. Then run the following command to import the feeds into your Anchore Enterprise deployment.

anchorectl airgap feed upload -f <filename>

Your Analyzer Service and Policy Engine Service will now be able to fetch the latest data from the Data Syncer Service as normal. This procedure must be repeated each time you want to update the datasets in your air-gapped environment.
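
Putting the steps together, a sketch of the full round trip (the filename is arbitrary and the license file path is an assumption):

# On a host outside the air gap, with internet access
anchorectl airgap feed download -f feeds.bin -l ./license.yaml

# Move feeds.bin across the air gap, then on a host that can reach your deployment
anchorectl airgap feed upload -f feeds.bin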

4.8.4 - Status Page

A live status page is available for real-time updates on the Anchore Data Service, including information on outages, maintenance, and options for subscribing to notifications. You can access the status page at https://status.anchore-enterprise.com.

Status Page

The status page is updated in real-time and provides information on the following:

  • Current Status - The current status of the Anchore Data Service.
  • Incidents - A list of any ongoing incidents that may be affecting the Anchore Data Service.
  • Scheduled Maintenance - A list of any upcoming maintenance windows that may affect the Anchore Data Service.
  • Subscribe to Updates - Options for subscribing to updates via email, SMS, or webhook.
  • Past Incidents - A list of past incidents that have affected the Anchore Data Service.
  • Historical Uptime - Historical uptime data for the Anchore Data Service.

4.9 - Integration

Overview

Anchore Enterprise exposes an API through which external software entities like inventory agents can report their health status. These software entities (agents, plugins, etc.) serve as a mechanism for external systems like Kubernetes clusters or container image repositories to integrate with Anchore Enterprise.

Enterprise refers to these software entities simply as integrations. A deployed Kubernetes Inventory agent is an example of an integration instance. A deployed Kubernetes Admission Controller is another example of an integration instance.

Integration health reporting

An integration instance can send health reports to Anchore Enterprise, which in turn maintains a record of those health reports. They are used by Enterprise to determine the status of the integration instances.

An integration instance can send a health report at most every 30 seconds. To prevent unlimited growth of stored health reports, Anchore’s Catalog Service will prune old health reports.

The configuration setting below allows you to specify how long health reports should be kept by the Catalog Service. This is the default setting found in the values file.

services:
  catalog:
    integrations:
      integration_health_report_ttl_days: 2

4.10 - Malware & Cataloger Scans

Malware & Cataloger Scanning Overview

When an image is analyzed/scanned, you can configure the process to best suit your particular use case and/or desired security controls. After discovery, this data can later be used within Anchore’s policy engine rules and gates. Please also remember to review this configuration.

Both the malware scanner and the catalogers offer new capabilities; details on these are as follows:

Malware

For an overview of the feature and how it works, see Malware Scanning.

Catalogers

During Analysis/Scans of your images, Anchore has the ability to run extra catalogers or searches. These are as follows:

  • retrieve_files - retrieve and index files matching a configured file list
  • secret_search and content_search - perform a search across file contents for a configured regexp match. Findings are then cataloged accordingly.

Limitations and Resource Usage

Both the malware scanner and the catalogers will increase analysis/scanning time; the added time depends on the size and number of files the image contains. Anchore supports sources; however, sources currently need to be analyzed with Syft rather than AnchoreCTL, and Syft does not currently support catalogers or malware checks. Where possible, depending on your use case, you should offload to Distributed Scanning/Analysis to reduce analyzer compute load on your central Anchore deployment.

Malware

  • Files in an image which are greater than 2GB will be skipped due to a limitation in ClamAV. Any skipped file will be identified with a Malware Signature as ANCHORE.FILE_SKIPPED.MAX_FILE_SIZE_EXCEEDED.
  • Malware scanning can ONLY operate when using Centralized Analysis and NOT Distributed Analysis.

Catalogers

Running extra catalogers will require more resources and time to perform analysis of images. Please take this into consideration when enabling them and defining your regexp values. Resource usage can be controlled by setting MAXFILESIZE in the match parameters, which excludes files larger than the given size from the search.

Enabling & Disabling Malware Scans & Catalogers

The process for enabling and configuring the malware scanner and other catalogers differs between Helm and Compose deployments. Additionally, there are two modes in which you can scan/analyze images, and therefore two places where this capability can be configured: 1. Distributed Mode and 2. Centralized Mode. For Distributed Analysis, the catalogers are configured in the AnchoreCTL Configuration. For Centralized Analysis, the catalogers are configured in the centralized Anchore deployment via the Analyzer config documented on this page.

Helm

Update the Helm values.yaml file. Below is an example configuration with malware scanning, retrieve_files, and secret_search enabled. Helm will take these values and define a ConfigMap in your Anchore Kubernetes deployment.

:warning: Malforming this file can cause the Anchore Analyzer to fail on all image analysis.

anchoreConfig:
  analyzer:
    configFile:
      retrieve_files:
        file_list:
          - '/etc/passwd'
      secret_search:
        match_params:
          - MAXFILESIZE=10000
        regexp_match:
          - "AWS_ACCESS_KEY=(?i).*aws_access_key_id( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9]).*"
          - "AWS_SECRET_KEY=(?i).*aws_secret_access_key( *=+ *).*(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=]).*"
          - "PRIV_KEY=(?i)-+BEGIN(.*)PRIVATE KEY-+"
          - "DOCKER_AUTH=(?i).*\"auth\": *\".+\""
          - "API_KEY=(?i).*api(-|_)key( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20,60}(?![A-Z0-9]).*"
          # - "ALPINE_NULL_ROOT=^root:::0:::::$"
      #
      ## Uncomment content_search: {} to configure file content searching
      # Very expensive operation - recommend you carefully test and review
      # content_search:
      #   match_params:
      #     - MAXFILESIZE=10000
      #   regexp_match:
      #     - "EXAMPLE_MATCH="
      #
      ## Malware scanning occurs only at analysis time when the image content itself is available
      malware:
        clamav:
          # Set to true to enable the malware scan
          enabled: true
          # Set to true to enable the db refresh on each scan
          db_update_enabled: true
          # Maximum time in milliseconds that ClamAV scan is allowed to run (default is 30 minutes)
          max_scan_time: 1800000

Please review the helm chart example values.yaml file for further detail.

Docker Compose

The malware scanner and catalogers can be configured and enabled in the ‘analyzer_config.yaml’ file. This file then needs to be mounted as a file volume in your Anchore Docker Compose file under the analyzer: service, as shown below:

analyzer:
    volumes:
      - ./analyzer_config.yaml:/anchore_service/analyzer_config.yaml:ro   #mounted analyzer_config

This file should contain the required configuration parameters. Please see the following example and adjust as required.

malware:
  clamav:
    # Set this to true to enable the malware scan
    enabled: true
    # Set this to false to turn off the db refresh on each scan
    db_update_enabled: true

retrieve_files:
  max_file_size_kb: 1000
  file_list:
    - '/etc/passwd'
    - '/etc/services'
    - '/etc/sudoers'

secret_search:
  match_params:
    - MAXFILESIZE=10000
  regexp_match:
    - "AWS_ACCESS_KEY=(?i).*aws_access_key_id( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9]).*"
    - "AWS_SECRET_KEY=(?i).*aws_secret_access_key( *=+ *).*(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=]).*"
    - "PRIV_KEY=(?i)-+BEGIN(.*)PRIVATE KEY-+"
    - "DOCKER_AUTH=(?i).*\"auth\": *\".+\""
    - "API_KEY=(?i).*api(-|_)key( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20,60}(?![A-Z0-9]).*"

## Uncomment content_search: {} to configure file content searching
# Very expensive operation - recommend you carefully test and review
# content_search:
#   match_params:
#     - MAXFILESIZE=10000
#   regexp_match:
#     - "EXAMPLE_MATCH="

Malware - Disabling DB Updates

The db_update_enabled property of the malware.clamav object shown above in the analyzer_config.yaml controls whether the analyzer will ask the data syncer for the latest ClamAV database before each analysis execution. By default, it is enabled and should be left on for up-to-date scan results. The db version is returned in the metadata section of the scan results available from the Anchore Enterprise API.

You can disable the update if you want to mount an external volume to provide the db data in /home/anchore/clamav/db inside the container (the location must be read-write for the Anchore user). This can be used to cache or share a db across multiple analyzers (e.g. using AWS EFS) or to support air-gapped deployments where the db cannot be automatically updated from the deployment itself.
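
As an illustration, a hedged Docker Compose sketch that mounts a shared database directory into the analyzer service (the host path /mnt/efs/clamav-db is an assumption; any location that is read-write for the Anchore user works), combined with db_update_enabled: false in analyzer_config.yaml as shown earlier:

analyzer:
    volumes:
      # Shared or pre-populated ClamAV database, e.g. on AWS EFS
      - /mnt/efs/clamav-db:/home/anchore/clamav/db:rw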

Malware - Advanced Configuration

The path for the db and the db update configuration are also available as environment variables inside the analyzer containers. These should not need to be used in most cases, but for air-gapped or other installations where the default configuration is not sufficient, they are available for customization.

| Name                  | Description                          | Default                  |
|-----------------------|--------------------------------------|--------------------------|
| ANCHORE_CLAMAV_DB_DIR | Location of the db dir to read/write | /home/anchore/clamav/db  |

For most cases, Anchore uses the default values for the clamscan invocations. If you would like to override any of the default values of those commands or replace existing ones, you can add the following to the analyzer_config.yaml:

malware:
  clamav:
    clamscan_args:
      - max-filesize=1000m
      - max-scansize=1000m

Please note that the value above will be passed directly to the corresponding commands, e.g.:

clamscan --suppress-ok-results --infected --recursive --allmatch --archive-verbose --tempdir={tempdir} --database={database} --max-filesize=1000m --max-scansize=1000m <path_to_tar>

4.11 - Max Image Size

Max Compressed Image Size

Anchore Enterprise can be configured to have a size limit for images being added for analysis. Images that exceed the configured maximum size will not be added to Anchore and the catalog service will log an error message providing details of the failure. This size limit is applied when adding images to Anchore Enterprise via AnchoreCTL, tag subscriptions, and repository watchers.

By default the max_compressed_image_size_mb feature is disabled.

It can be enabled via the max_compressed_image_size_mb property in the Anchore Enterprise configuration file or by using the ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB env variable.

  • When a value greater than zero is supplied, the value represents the size limit in MB of the compressed image.
  • When a value less than zero is supplied, it will disable the feature and allow images of any size to be added to Anchore.
  • A value of 0 will prevent any images from being added.
  • Finally, non-integer values will cause bootstrap of the service to fail.

If using Docker Compose with the default config, this can be set through the ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB env variable on the catalog service. If using Helm, it can be configured by using the values file and adding the ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB env variable to the catalog.extraEnv property.
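
For example, a sketch of each method setting a 4 GB limit (the value is an arbitrary example; adjust for your deployment). For Docker Compose:

services:
  catalog:
    environment:
      - ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB=4096

For Helm:

catalog:
  extraEnv:
    - name: ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB
      value: "4096"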

4.12 - Custom Certificate Authority

If a custom CA certificate is required to access an external resource, then the certificate needs to be propagated to the following trust stores in Anchore:

  1. The Operating System provided trust store.
  2. The Python Certifi trust store.
  3. NodeJS runtime for the UI.

When might you need to add a CA Cert to Anchore Enterprise?

  1. Using an SSL terminating network proxy in your Anchore deployment environment.
    • Anchore needs to be able to reach external https endpoints from vulnerability feeds to container registries.
  2. Using a Container Registry with self-signed certificate or custom CA.
    • You can update the trust store OR use the --insecure option when configuring the registry in Anchore.

The operating system trust store is read by the skopeo utility (the tool used to interact with container registries) and python requests library that is used to access container registries to read manifests and pull image layers.

Adding your certificate(s)

Approach 1

The first approach is centered around creating a new Anchore Enterprise image and inserting the CA certs into the right places. You might need to perform this for both the Anchore Enterprise Core and Anchore Enterprise UI images.

The following Dockerfile illustrates an example of how this general process can be automated to produce your own container with a new custom CA cert installed.

1. Create Dockerfile

Example Dockerfile updating the certifi trust store for the Python Anchore Enterprise Image

FROM docker.io/anchore/enterprise:v5.X.X
USER root:root
COPY ./custom-ca.pem /home/anchore/venv/lib/python3.11/site-packages/certifi/
# This is to verify the CA's are in the correct format
RUN openssl crl2pkcs7 -nocrl -certfile /home/anchore/venv/lib/python3.11/site-packages/certifi/custom-ca.pem | openssl pkcs7 -print_certs -noout
COPY ./custom-ca.pem /etc/pki/ca-trust/source/anchors/
RUN update-ca-trust && trust list
RUN /usr/bin/cat /home/anchore/venv/lib/python3.11/site-packages/certifi/custom-ca.pem >> /home/anchore/venv/lib/python3.11/site-packages/certifi/cacert.pem
USER anchore:anchore

We suggest adding an indicator to the resulting image name that designates it as being custom built. Ex: enterprise:v5.X.X-custom

2. Build Custom Image using Dockerfile

sudo docker build -t anchore/enterprise:v5.X.X-custom .

You will need to perform this on each build, store the new enterprise image in a private registry, and update your Helm or Compose deployment to use the new image reference.
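
For example, a sketch of the tag-and-push step (the private registry hostname is a placeholder):

docker tag anchore/enterprise:v5.X.X-custom myregistry.example.com/anchore/enterprise:v5.X.X-custom
docker push myregistry.example.com/anchore/enterprise:v5.X.X-custom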

Approach 2

The second approach is about injecting the secrets/CA certs into the containers at runtime and therefore doesn’t require a new image to be built.

Docker Compose

Enterprise

The entrypoint of the enterprise container will enumerate certificates mounted at /home/anchore/certs, combine them with its built-in CAs and populate them all via the environment variables:

  • REQUESTS_CA_BUNDLE
  • SSL_CERT_DIR

The above should configure the running services to use the custom CA(s). Please note that a shell started by exec-ing into a container may not have run /docker-entrypoint.sh, so these environment variables may not be set in that shell.

For all services using the enterprise container:

  volumes:
    - ./ldap-combined-ca-cert-bundle.pem:/home/anchore/certs/ldap-combined-ca-cert-bundle.pem:ro

UI

Simply supply your own custom certificate(s) as environment variable(s) and volume mount(s). The example below is supplying a custom CA for use with LDAP in the Anchore Enterprise UI image.

ui:
  image: docker.io/anchore/enterprise-ui:v5.9.0
  volumes:
    - ./license.yaml:/license.yaml:ro
    - ./config-ui.yaml:/config/config-ui.yaml:z
    - ./ldap-combined-ca-cert-bundle.pem:/home/anchore/certs/ldap-combined-ca-cert-bundle.pem:ro
  environment:
    - NODE_EXTRA_CA_CERTS=/home/anchore/certs/ldap-combined-ca-cert-bundle.pem

Helm

For Helm deployments, first create the Kubernetes secret that will store your cert(s). The example below is supplying multiple certs in a custom-ca-cert secret and anchore K8s namespace.

kubectl create secret generic custom-ca-cert --from-file=ldap-ca-cert.pem=./ldap-ca-cert.pem --from-file=db-ssl-ca-cert.pem=./db-ssl-ca-cert.pem -n anchore

Ensure you have the CA cert secret in the same namespace as your deployment (e.g. -n anchore).

Now update your Helm values file to reference your secret and CA cert files:

certStoreSecretName: "custom-ca-cert"
ui:
  ldapsRootCaCertName: "ldap-ca-cert.pem"
anchoreConfig:
  database:
    ssl: true
    sslRootCertFileName: "db-ssl-ca-cert.pem"

Please note there are other certs you can supply and configure, such as anchoreConfig.internalServicesSSL & anchoreConfig.keys.privateKeyFileName.

Additional Background

Operating System

To add a certificate to the operating system trust store the CA certificate should be placed in the /etc location that is appropriate for the container image being used.

  • For Anchore 4.5.X and newer, the base container is Red Hat Universal Base Image 9.X, which stores certs in /etc/pki/ca-trust/source/anchors/ and requires the user to run the update-ca-trust command as root to update the system certs.

Anchore Enterprise UI - Node.js

The Anchore Enterprise UI is powered by Node.js and as such, when the UI makes calls to external services such as LDAP it might require a certificate. Please note that Node.js can also pull certificates from the Operating System store.

  • Anchore Enterprise loads the certificate into the NODE_EXTRA_CA_CERTS environment variable

Anchore Enterprise - Python

Certifi is a curated list of trusted certificate authorities that is used by the Python requests HTTP client library. The Python requests library is used by Anchore for all HTTP interactions, including when communicating with the Anchore Feed Service, when webhooks are sent to a TLS-enabled endpoint, and between Anchore services if TLS has been configured. To update the Certifi trust store, the CA certificate should be appended onto the cacert.pem file provided by the Certifi library.

  • For Enterprise 5.1.x and newer, Python was upgraded to python 3.11, certifi’s cacert.pem is installed in /home/anchore/venv/lib/python3.11/site-packages/certifi/cacert.pem

Debugging

How to know if you need a custom cert?

Have a proxy or custom CA in place? Can’t ignore self-signed certs? Then yes.

Ask your IT / Infrastructure Team, otherwise you can test the connections from your Anchore deployment/server to the service in question.

curl https://myregistry.example.com (if you see SSL verify errors then you might require a custom CA)

If you are able to ignore self-signed certs, you can do this for container registries in Anchore.

Fetch, test and use the Custom Cert

If you have identified that you need to add a custom CA cert into Anchore, you can run the following to fetch and test the certificate before redeploying Anchore.

# fetch the cert
openssl s_client -showcerts -servername myregistry.example.com -connect myregistry.example.com:443 </dev/null > cacert.pem
# test the cert
curl -v --cacert cacert.pem https://myregistry.example.com

You can take this certificate and add this to your Anchore deployment as described above.

4.13 - Network Proxies

As covered in the Network sections of the requirements document, Anchore requires three categories of network connectivity.

  • Registry Access - Network connectivity, including DNS resolution, to the registries from which Anchore needs to download images.

  • Feed Service - Anchore synchronizes feed data such as operating system vulnerabilities (CVEs) from the Anchore Cloud Service. See Feeds Overview for the full list of endpoints.

  • Access to Anchore Internal Services - Anchore is composed of six smaller services that can be deployed in a single container or scaled out to handle load. Each Anchore service should be able to connect to the other services over the network.

In environments where access to the public internet is restricted, a proxy server may be required to allow Anchore to connect to the Anchore Cloud Feed Service or to a publicly hosted container registry.

Anchore can be configured to access a proxy server by using environment variables that are read by Anchore at run time.

  • https_proxy: Address of the proxy service to use for HTTPS traffic in the following form: {PROTOCOL}://{IP or HOSTNAME}:{PORT} eg. https://proxy.corp.example.com:8128

  • http_proxy: Address of the proxy service to use for HTTP traffic in the following form: {PROTOCOL}://{IP or HOSTNAME}:{PORT} eg. http://proxy.corp.example.com:8128

  • no_proxy: Comma delimited list of hostnames or IP addresses which should be accessed directly without using the proxy service. eg. localhost,127.0.0.1,registry.example.com

Environment Variables to Control Proxy Behavior

  • Setting the endpoints to HTTP proxy:
    • Set both HTTP_PROXY and http_proxy environment variables for regular HTTP protocol use.
    • Set both HTTPS_PROXY and https_proxy environment variables for HTTP + TLS (HTTPS) protocol use.
  • Setting endpoints to exclude from proxy use:
    • Set both NO_PROXY and no_proxy environment variables to exclude those domains from proxy use defined in the preceding proxy configurations.

If using Docker Compose these need to be set in each service entry.

If using Helm Chart, set these in the extraEnv entry for each service.

Notes:

  • Do not use double quotes (") around the proxy variable values.

Authentication

For proxy servers that require authentication the username and password can be provided as part of the URL:

eg. https_proxy=https://user:password@proxy.corp.example.com:8128

If the username or password contains any non-URL-safe characters, then these should be URL-encoded.

For example:

The password F@oBar! would be encoded as F%40oBar%21
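
If you need to compute the encoding for an arbitrary value, a quick sketch using Python’s standard library:

python3 -c 'import urllib.parse; print(urllib.parse.quote("F@oBar!", safe=""))'
# prints: F%40oBar%21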

Setting Environment Variables

Docker Compose: https://docs.docker.com/compose/environment-variables/

An example - You would need these entries for each service. Please tweak the example for your environment.

environment:
  - http_proxy=http://my-proxy.domain:8080
  - https_proxy=http://my-proxy.domain:8080
  - no_proxy=localhost, localhost.localdomain, 127.0.0.1, analyzer, anchore-db, api, catalog, notifications, policy-engine, queue, reports, reports-worker, ui-redis, ui, data-syncer, swagger-ui, prometheus, *.my-registry.domain
  - HTTP_PROXY=http://my-proxy.domain:8080
  - HTTPS_PROXY=http://my-proxy.domain:8080
  - NO_PROXY=localhost, localhost.localdomain, 127.0.0.1, analyzer, anchore-db, api, catalog, notifications, policy-engine, queue, reports, reports-worker, ui-redis, ui, data-syncer, swagger-ui, prometheus, *.my-registry.domain

Kubernetes: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/

An example - You can set this in your Helm values.yaml file at the global level. Please tweak the example for your environment.

extraEnv:
    - name: http_proxy
      value: http://my-proxy.domain:8080
    - name: https_proxy
      value: http://my-proxy.domain:8080
    - name: no_proxy
      value: 'localhost, 127.0.0.1, cluster.local, anchore-enterprise-api, anchore-enterprise-catalog, anchore-enterprise-datasyncer, anchore-enterprise-notifications, anchore-enterprise-policy, anchore-enterprise-reports, anchore-enterprise-reportsworker, anchore-enterprise-ui, anchore-postgresql, anchore-ui-redis-headless, anchore-ui-redis-master, anchore-postgresql-hl'
    - name: HTTP_PROXY
      value: http://my-proxy.domain:8080
    - name: HTTPS_PROXY
      value: http://my-proxy.domain:8080
    - name: NO_PROXY
      value: 'localhost, 127.0.0.1, cluster.local, anchore-enterprise-api, anchore-enterprise-catalog, anchore-enterprise-datasyncer, anchore-enterprise-notifications, anchore-enterprise-policy, anchore-enterprise-reports, anchore-enterprise-reportsworker, anchore-enterprise-ui, anchore-postgresql, anchore-ui-redis-headless, anchore-ui-redis-master, anchore-postgresql-hl'

Deployment Architecture Notes

When setting up a network proxy, keep in mind that you will need to explicitly allow inter-service communication within the Anchore Enterprise deployment to bypass the proxy, and potentially other hostnames as well (e.g. internal registries), to ensure that traffic is directed correctly. In general, all Anchore Enterprise service endpoints (the URLs for enabled services in the output of an ‘anchorectl system status’ command) as well as any internal registries (the hostnames you may have set up with ‘anchorectl registry add --username …’ or as part of an un-credentialed image add ‘anchorectl image add registry:port/….’) should not be proxied (i.e. added to the no_proxy list, as described above).

If you wish to tune this further, below is a list of each component that makes an external URL fetch for various purposes:

  • Catalog: makes connections to image registries (any host added via ‘anchorectl registry add’ or directly via ‘anchorectl image add’)
  • Analyzer: same as catalog
  • Data Syncer: by default connects to Anchore Data Service for downloading vulnerability datasets. See Data Feeds and Data Synchronization for more details.

4.14 - TLS / SSL

The following sections describe how to configure TLS for Anchore API and/or Anchore Enterprise Services. Please note that the UI service currently does not support listening via TLS.

Internal TLS for Anchore Enterprise Services Helm deployments

graph TB
    subgraph External Traffic
        user[User]
        kubernetes[Ingress Controller]
    end

    subgraph Services with Internal TLS
        api[API]
        policy[Policy]
        catalog[Catalog]
        analyzer[Analyzer]
        reports[Reports]
        reportsworker[Reports Worker]
        notifications[Notifications]
        dataSyncer[Data Syncer]
        simplequeue[Simple Queue]
    end

    subgraph Services Without TLS
        ui[UI]
    end

    user -- HTTP/HTTPS --> kubernetes
    kubernetes -- HTTPS --> api
    kubernetes -- HTTP --> ui

    api <--> policy
    api <--> catalog
    api <--> analyzer
    api <--> reports
    api <--> reportsworker
    api <--> notifications
    api <--> dataSyncer
    api <--> simplequeue

    policy <--> catalog
    catalog <--> analyzer
    analyzer <--> reports
    reports <--> reportsworker
    reportsworker <--> notifications
    notifications <--> dataSyncer
    dataSyncer <--> simplequeue

The following will configure Anchore Enterprise internal services to communicate with one another via TLS.

This script is provided as a demonstration of how to generate certificates (generate-anchore-tls-certs.sh):

#!/bin/bash
export NAMESPACE="$1";
export RELEASE="$2";
export SANS="DNS:*.${NAMESPACE}.svc.cluster.local,DNS:${RELEASE}-enterprise-api,DNS:${RELEASE}-enterprise-catalog,DNS:${RELEASE}-enterprise-notifications,DNS:${RELEASE}-enterprise-policy,DNS:${RELEASE}-enterprise-reports,DNS:${RELEASE}-enterprise-reportsworker,DNS:${RELEASE}-enterprise-simplequeue,DNS:${RELEASE}-enterprise-ui,DNS:${RELEASE}-enterprise-datasyncer"
# NOTE: This ends up being the filename of resulting certificates
SERVER_NAME="anchore"

ANCHORE_TLS=./anchore-tls
SCRIPT=$(readlink -f "$0")
DIR=$(dirname "$SCRIPT")

echo "Making ANCHORE_TLS dir..."
mkdir -p ${ANCHORE_TLS}
cd ${ANCHORE_TLS}

# Customize as needed
CORPORATION=Anchore
GROUP=K8S
CITY=Atlanta
STATE=Georgia
COUNTRY=US

CERT_AUTH_PASS=`openssl rand -base64 32`
echo $CERT_AUTH_PASS > cert_auth_password
CERT_AUTH_PASS=`cat cert_auth_password`

# create the certificate authority
openssl \
  req \
  -subj "/CN=$SERVER_NAME.ca/OU=$GROUP/O=$CORPORATION/L=$CITY/ST=$STATE/C=$COUNTRY" \
  -new \
  -x509 \
  -passout pass:$CERT_AUTH_PASS \
  -keyout ca-cert.key \
  -out ca-cert.crt \
  -days 3650

# create client private key (used to decrypt the cert we get from the CA)
openssl genrsa -out $SERVER_NAME.key

# create the CSR (Certificate Signing Request)
openssl \
  req \
  -new \
  -nodes \
  -subj "/CN=$SERVER_NAME/OU=$GROUP/O=$CORPORATION/L=$CITY/ST=$STATE/C=$COUNTRY" \
  -sha256 \
  -extensions v3_req \
  -reqexts SAN \
  -key $SERVER_NAME.key \
  -out $SERVER_NAME.csr \
  -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=$SANS")) \
  -days 3650

# sign the certificate with the certificate authority
openssl \
  x509 \
  -req \
  -days 3650 \
  -in $SERVER_NAME.csr \
  -CA ca-cert.crt \
  -CAkey ca-cert.key \
  -CAcreateserial \
  -out $SERVER_NAME.crt \
  -extfile <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=$SANS")) \
  -extensions SAN \
  -passin pass:$CERT_AUTH_PASS

# Create key pem
cat $SERVER_NAME.key > $SERVER_NAME-key.pem

# Create cert pem
cat $SERVER_NAME.crt > $SERVER_NAME.pem
cat ca-cert.crt >> $SERVER_NAME.pem
cd ..
cat << EOF > anchore-tls-certs.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: anchore-tls-certs
  namespace: $NAMESPACE
type: Opaque
data:
  internal-cert.pem: $(base64 anchore-tls/$SERVER_NAME.pem --wrap 0)
  internal-cert-key.pem: $(base64 anchore-tls/$SERVER_NAME-key.pem --wrap 0)
  ca.pem: $(base64 anchore-tls/ca-cert.crt --wrap 0)
EOF
kubectl apply -n $NAMESPACE -f anchore-tls-certs.yaml

If custom CA certificates are required for LDAP or Postgres, be sure to append them to the anchore-tls/ca-cert.crt & anchore-tls/anchore.pem files.

The script can be called as follows:


chmod +x generate-anchore-tls-certs.sh
# Provide values for your kubernetes namespace containing anchore and your helm release
./generate-anchore-tls-certs.sh $NAMESPACE $RELEASE

Make the following adjustments in your helm values file:

certStoreSecretName: anchore-tls-certs

anchoreConfig:
  internalServicesSSL:
    enabled: true
    verifyCerts: false
    certSecretKeyFileName: internal-cert-key.pem
    certSecretCertFileName: internal-cert.pem

ui:
  ldapsRootCaCertName: ca.pem

You will need to perform a Helm install or upgrade to apply all the changes and restart all the pods.
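
For example, a sketch of the upgrade command (assuming the chart repository was added with the alias anchore; the release name and namespace are placeholders matching the script above):

helm upgrade ${RELEASE} anchore/enterprise -n ${NAMESPACE} -f values.yaml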

If using an ingress controller see the next section.

Separating API & UI Ingress for Helm deployments

If you are using ingress to access the Anchore Enterprise API & Anchore Enterprise UI, then you will need to separate the ingress configuration if you configure either external or internal TLS. Since the API supports TLS and the UI does not, once TLS is enabled for the API the ingress controller will need to send encrypted traffic to the API and unencrypted traffic to the UI.

Make the following change to your helm values file to configure the ingress to use TLS to communicate with the API service:

ingress:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # These are optional
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '600'
  uiHosts: []

Add an ingress directly in kubernetes for the UI:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: anchore-ingress-ui
  annotations:
    # These are optional
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '600'
spec:
  ingressClassName: nginx # NOTE: This could be another value such as alb
  tls: []
  rules:
    - host: anchore.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: anchore-enterprise-ui # Ensure this matches the name of the Anchore UI service
                port:
                  number: 80

Apply UI Ingress changes:

kubectl apply -n $NAMESPACE -f anchore-ingress-ui.yaml

External TLS for Anchore Enterprise API Service

Please note that configuring Internal TLS above also includes External TLS. If you performed the steps above then this is not necessary.

In the following example, the external API service is configured to listen on port 443 with a certificate for its external hostname anchore.example.com.

Each service published in the Anchore Enterprise configuration (apiext, catalog, simplequeue, analyzer, policy_engine and kubernetes_webhook) can be configured to use transport level security.

services:
  apiext:
    enabled: True
    endpoint_hostname: 'anchore.example.com'
    listen: '0.0.0.0'
    port: 443
    ssl_enable: True
    ssl_cert: '/config/anchore-ex.crt'
    ssl_key: '/config/anchore-ex.key'
    ssl_chain: '/config/anchore-ex.crt'

Setting | Notes
enabled | Whether the service is enabled
endpoint_hostname | DNS name of the service
listen | IP address of the interface on which the service should listen (use '0.0.0.0' for all - default)
port | Port on which the service should listen
ssl_enable | Enable transport level security
ssl_cert | Name, including full path, of the certificate file
ssl_key | Name, including full path, of the private key file
ssl_chain | [optional] Name, including full path, of the certificate chain file

The certificate files should be placed on a path accessible to the Anchore Enterprise service, for example in the /config directory, which is typically mapped as a volume into the container. Note that the location outside the container will depend on your configuration. For example, if you are volume mounting ‘/path/to/aevolume/config/’ on the docker host to ‘/config’ within the container, you’ll need to place the SSL files in ‘/path/to/aevolume/config/’ on the docker host so that they are accessible in ‘/config/’ inside the container before starting the service.

The ssl_chain file is optional and may be required by some certificate authorities. If your certificate authority provides a chain certificate then include it within the configuration.

Note: While a certificate may be purchased from a well-known and trusted certificate authority, in some cases the certificate is signed by an intermediate certificate which is not included within a TLS/SSL client’s trust store. In these cases the intermediate certificate is signed by the certificate authority, but without the full ‘chain’ showing the provenance and integrity of the certificate, the TLS/SSL client may not trust the certificate.

Any certificates used by the Anchore Enterprise services need to be trusted by all other Anchore Enterprise services.
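To confirm which certificate the API service actually presents after a restart, a generic TLS probe can be used. The following is a sketch with openssl, using the example hostname from above:

# Print the subject, issuer, and validity window of the served certificate
openssl s_client -connect anchore.example.com:443 -servername anchore.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates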

If an internal certificate authority is used, the root certificate for the internal CA can be added to Anchore Enterprise using the following procedure, or SSL verification can be disabled by setting the following parameter:

internal_ssl_verify: False

4.15 - Notifications

Overview

Alert external endpoints (Email, GitHub, Slack, and more) about Anchore events such as policy evaluation results, vulnerability updates, and system errors with the Notifications service. Configure notification endpoints and manage which specific events you need through the Anchore Enterprise UI.

For more information on the Notifications Service in general, its concepts, and details on its configuration, please refer to the Notifications Service.

The following sections in this document describe the current endpoints available for configuration, the options provided for selecting events, the various actions you can do with a configuration (add, edit, test, and remove), and how to disable an endpoint as an admin.

Anchore Enterprise includes a Notifications service to alert external endpoints about the system’s activity. The services that make up Anchore Enterprise generate events to record significant activity, such as an update in the policy evaluation result or vulnerabilities for a tag, or an error analyzing an image. The Notifications service provides a mechanism to selectively send events of interest to supported external endpoints. The form of the notification depends on the endpoint: a formatted message for Slack, email, and MS Teams endpoints; tickets in GitHub and Jira endpoints; and a JSON payload for webhook endpoints.

Glossary

Event
An information packet generated by an Anchore Enterprise service to indicate some activity.
Endpoint
External tool capable of receiving messages such as Slack, GitHub, Jira, MS Teams, email or webhook.
Endpoint Configuration
Connection information of the endpoint used for sending messages.
Selector
Criteria for selecting events to be notified.

Installation

Anchore Enterprise Notifications is included with Anchore Enterprise, and is installed by default when deploying a trial quickstart with Docker Compose, or a production deployment on Kubernetes.

Configuration

Enterprise Notifications Service

The service loads configuration from the notifications section of the config.yaml. See the following snippet of the configuration:

...
services:
  notifications:
    enabled: true
    require_auth: true
    endpoint_hostname: "<hostname>"
    listen: '0.0.0.0'
    port: 8228
    cycle_timers:
      notifications: 30
    # Set Anchore Enterprise UI base URL to receive UI links in notifications
    ui_url: "<enterprise-ui-url>"

The cycle_timers -> notifications setting controls how often the service checks for events in the system that need to be processed for notifications. The default is every 30 seconds.

The ui_url is used for constructing links to Enterprise UI pages in the notifications. Configure this property to the Enterprise UI’s base URL. This URL should be accessible from the endpoint receiving the notification for the links to work correctly. If the value is not set, the notification message will still be sent to the endpoint, but it won’t contain a clickable link to the Enterprise UI.

Note: Any change to the configuration requires a restart of the service for the updates to take effect.

RBAC Permissions

In the Anchore Enterprise deployment, the table below lists the required actions and containing roles:

Description | Action | Roles
List all the available notification endpoints and their status | listNotificationEndpoints | Read Only, Read Write
List all available configurations for an endpoint | listNotificationEndpointConfigurations | Read Only, Read Write
Get an endpoint configuration and associated selectors | getNotificationEndpointConfiguration | Read Only, Read Write
Create an endpoint configuration and associated selectors | createNotificationEndpointConfiguration | Read Write
Update an endpoint configuration and associated selectors | updateNotificationEndpointConfiguration | Read Write
Delete an endpoint configuration and associated selectors | deleteNotificationEndpointConfiguration | Read Write

External Tools

To send notifications to an external tool/endpoint, the service requires connection information for that endpoint. See the endpoint-specific sections that follow for the required information for each endpoint.

Concepts

Endpoint Status

All endpoints in the Notifications service can be toggled as Enabled or Disabled, and the endpoint status reflects that state. By default, all endpoints are enabled. Setting the endpoint status to disabled stops all notifications from going out to any configurations of that specific endpoint. This is a system-wide setting that can only be updated by the admin account; it is read-only for all other accounts.

Endpoint Configuration

The endpoint configuration is the connection information such as URL, user credentials, and so on for an endpoint. The service allows multiple configurations per endpoint. Endpoint configurations are scoped to the account.

Selector

The service provides a mechanism to selectively choose notifications and route them to a configured endpoint. This is achieved using a Selector, which is a collection of filtering criteria. Each event is processed against a Selector to determine whether it is a match or not. If the Selector matches the event, a notification is sent to the configured endpoint by the service.

For a quick list of useful notifications and associated Selector configurations, see Quick Selection.

A Selector encapsulates the following distinct filtering criteria: scope, level, type, and resource type. Some criteria allow a limited set of values, while others accept wildcards. The value for each criterion has to be set for the matching to compute correctly.

Scope

Allowed values: account, global

Events are scoped to the account responsible for the event creation.

  • account scope matches events associated with a user account.
  • global scope matches events of any and all users. The global scope is limited to the admin account only; non-admin account users can only specify account as the scope.

Level

Allowed values: info, error, *

Events are associated with a level that indicates whether the underlying activity is informational or resulted in an error.

  • info matches informational events such as policy evaluation or vulnerability updates, image analysis completion, and so on.
  • error matches failures such as image analysis failure.
  • * matches all events.

Type

Allowed values: strings with or without regular expressions

Event types have a structured format <category>.<subcategory>.<event>. Thus, * matches all types of events. Category is indicative of the origin of the event.

  • system.* matches all system events.
  • user.* matches events that are relevant to individual consumption.
  • Omitting an asterisk will do an exact match. See the GET /event_types route definition in the external API for the list of event types.

Resource Type

Allowed values: *, image_digest, image_tag, image_reference, repository, feeds, feed, feed_group

In most cases, events are generated during an operation involving a resource. Resource type is metadata of that resource. For instance, image_tag is the resource type in a policy evaluation update event. Use * to match all resource types if you are uncertain which one applies.

Quick Selection

The following Selector configurations are for notifying a few interesting events.

Receive | Scope | Level | Type | Resource Type
Policy evaluation and vulnerabilities updates | account | * | user.checks.* | *
User errors | account | error | user.* | *
User infos | account | info | user.* | *
Everything user related | account | * | user.* | *
System errors | account | error | system.* | *
System infos | account | info | system.* | *
Everything system related | account | * | system.* | *
All | account | * | * | *
All for every account (admin account only) | global | * | * | *

Notifications UI Walkthrough

alt_text

Supported Endpoints

Email
Send notifications to a specific SMTP mail service.
GitHub
Version control for software development using Git.
JIRA
Issue tracking and agile product management software by Atlassian.
Slack
Team collaboration software tools and online services by Slack Technologies.
Teams
Team collaboration software tools and online services by Microsoft.
Webhook
Send notifications to a specific API endpoint.

Event Selector Options

alt_text

When adding or editing a configuration, selecting which events to be notified on can be as easy as choosing one of the above three options: All Notification Events, Policy & Vulnerability Events, or Error Events.

Advanced users can select Add Custom Selector for more granularity:

alt_text

In the example shown, notifications are configured for all system info events affecting any resource associated with the user’s account. For an in-depth explanation of the provided properties and their possible values, view our Selector documentation.

Adding a Configuration

alt_text

If you haven’t already defined a configuration for an endpoint, simply click Let’s Add One! as shown above. Once you have, add additional configurations with Add New Configuration as shown below.

alt_text

Upon doing so, a modal will appear with various properties shown on the left side. Note that based on the type of endpoint, these properties may differ.

To view the various requirements, check the documentation for Email, GitHub, JIRA, Slack, and Teams.

alt_text

For more information on adding a custom selector, please view our Selector documentation.

Prior to saving your new configuration, feel free to test with the Test Configuration button. Then save with OK.

Note: If OK is not enabled, be sure all required fields have been filled out.

Editing a Configuration

alt_text

To edit a configuration entry, click Edit within the Actions column, as shown above.

alt_text

The various fields available for editing are the same shown when adding the configuration. For additional info on a specific field, hover over the provided question icon circled with an orange ring next to the field name.

At any time, you can select Cancel to disregard any changes you’ve made.

For testing any new changes prior to saving them, click Test Configuration.

To save your changes, click OK. If OK is not enabled, be sure all required fields have been filled out.

Testing a Configuration

alt_text

When viewing your configurations, testing is easy - just look under the Actions column and click Test for that entry.

alt_text

Otherwise, when adding or editing a configuration, search for the test button pictured above. It can be found near the bottom of the modal, next to the Cancel and OK buttons.

Removing a Configuration

alt_text

To remove a specific notification configuration, simply click on the Remove button (as shown above) within the Actions column for that entry.

Select Yes to proceed with the deletion process or No to cancel. Please note that once you agree to remove the configuration, you won’t be able to recover it.

Admin-specific Actions

alt_text

Disabling Endpoints

As an admin, navigate to System > Notifications and click on the toggle visible in the lower-right corner of the specific endpoint you’re aiming to disable.

By default, all endpoints (such as Email, Slack, and Webhook) are enabled out of the box. Disabling a specific endpoint requires admin privileges because it stops all notifications from going out to any configuration for that endpoint, system-wide.

Note that users are still able to add, edit, test, and remove notification configuration items, but no event messages will be sent for that endpoint until it is re-enabled.

4.15.1 - Slack

Notifications to Slack are in the form of messages to a channel.

Requirements

Do the following to receive Slack notifications.

  1. An Incoming Webhook URL is required. Follow instructions to create one.
  2. Copy the Incoming Webhook URL.
  3. Create a Slack endpoint configuration in the Notifications service either via Enterprise UI or the API directly.

4.15.2 - GitHub

Notifications to GitHub are in the form of new issues in a repository.

Requirements

Do the following to receive GitHub notifications.

  1. Provide the following required GitHub account and repository-related information.
    • Username of the account for creating issues.
    • Token with repository access. Follow instructions to create a new Access Token in the account used for creating issues.
    • Name of the repository where the issues are created.
    • Owner of the repository.
    • (Optional) GitHub API URL, defaults to https://api.github.com if not specified.
    • (Optional) milestone assigned to the issue.
    • (Optional) one or more labels assigned to the issue.
    • (Optional) one or more GitHub users assigned to the issue.
  2. Create a GitHub endpoint configuration in the Notifications service either via Enterprise UI, or the API directly.

4.15.3 - Jira

Notifications to Jira are in the form of new issues in a project.

Requirements

Do the following to receive Jira notifications.

  1. Provide the following required Jira account and project related information.
    • URL of the Jira project.
    • Username of the account for creating issues.
    • API token or password depending on the Jira project.
      • For Jira Cloud projects an API token is required. Follow instructions to create a new API token for the account creating issues.
      • For Jira Self-managed projects, the password of the account creating issues is required.
    • Project key, same as the prefix in the issue identifier. For instance, issue TEST-1 has the project key TEST.
    • Type of the issue created such as Bug, Task, Story, and so on.
    • (Optional) priority assigned to the issue such as Low, Medium, High, and so on.
    • (Optional) one or more labels to be assigned to the issue.
    • (Optional) Jira user to be assigned to the issue.
  2. Create a Jira endpoint configuration in the Notifications service either via Enterprise UI, or the API directly.

4.15.4 - SMTP

Notifications to an SMTP server are in the form of HTML and plain-text messages.

Requirements

Do the following to receive SMTP notifications.

  1. Provide the following required SMTP server information.
    • Host name/address
    • Port number
    • From address
    • To address
    • (Optional) username
    • (Optional) password
    • (Optional) use tls to encrypt connection
  2. Create an SMTP endpoint configuration in the Notifications service either via Enterprise UI, or the API directly.

4.15.5 - Microsoft Teams

Notifications to Microsoft Teams are in the form of messages to a channel.

Requirements

Do the following to receive Microsoft Teams notifications.

  1. An incoming webhook URL is required. See the Microsoft instructions to create one.
  2. Copy the incoming webhook URL.
  3. Create a Microsoft Teams endpoint configuration in the Notifications service either via Enterprise UI, or the API directly.

4.16 - Reports

Overview

Anchore Enterprise Reports aggregates data to provide insightful analytics and metrics for account-wide artifacts. The service employs GraphQL to expose a rich API for querying the aggregated data and metrics.

NOTE: This service captures a snapshot of artifacts in Anchore Enterprise at a given point in time. Therefore, analytics and metrics computed by the service are not real time, and may not reflect the most up-to-date state of Anchore Enterprise.

Installation

Anchore Enterprise Reports is included with Anchore Enterprise, and is installed by default when deploying a trial quickstart with Docker Compose, or a production deployment on Kubernetes.

How it works

One of the main functions of Anchore Enterprise Reports is aggregating data. The service keeps a summary of all current and historical images and tags for every account known to Anchore Enterprise. It also maintains vulnerability reports and policy evaluations generated using the active bundle for all the images and tags respectively.

WARNING: Anchore Enterprise Reports shares a persistence layer with Anchore Enterprise. Ensure sufficient storage is provisioned.

Configuration

Anchore Enterprise Reports are broken up into two services:

  • The reports_worker service, which is responsible for the ingress and egress of report data.
  • The reports service, which is responsible for report generation.

Each service has a configuration section in the values file. Below are sample configurations and the default values.

...
services:
  reports_worker:
    # Set enable_data_ingress to true for periodically syncing data from anchore enterprise into the reports service
    enable_data_ingress: true
    
    # Set enable_data_egress to true to periodically remove reporting data that has been removed in other parts of the system
    enable_data_egress: false
    
    # data_egress_window defines the number of days to keep reporting data following its deletion in the rest of the system.
    # The default value of 0 will remove it on the next task run
    data_egress_window: 0
    
    # data_refresh_max_workers is the maximum number of concurrent threads used to refresh existing results (etl vulnerabilities and evaluations) in the reports service. Set a value greater than 0; otherwise it defaults to 10
    data_refresh_max_workers: 10
    
    # data_load_max_workers is the maximum number of concurrent threads used to load new results (etl vulnerabilities and evaluations) into the reports service. Set a value greater than 0; otherwise it defaults to 10
    data_load_max_workers: 10
    
    cycle_timers:
      # Timers that describe how often each operation should run
      reports_image_load: 600  # MIN 300 MAX 100000 Default 600
      reports_tag_load: 600  # MIN 300 MAX 100000 Default 600
      reports_runtime_inventory_load: 600  # MIN 300 MAX 100000 Default 600
      reports_extended_runtime_vuln_load: 1800 # MIN 300 MAX 100000 Default 1800
      reports_image_refresh: 7200  # MIN 3600 MAX 100000 Default 7200
      reports_tag_refresh: 7200  # MIN 3600 MAX 100000 Default 7200
      reports_metrics: 3600  # MIN 1800 MAX 100000 Default 3600
      reports_image_egress: 600  # MIN 300 MAX 100000 Default 600
      reports_tag_egress: 600  # MIN 300 MAX 100000 Default 600
      
    runtime_report_generation:
      # Provides the ability to enable/disable individual runtime report loading.
      inventory_images_by_vulnerability: true
      vulnerabilities_by_k8s_namespace: true
      vulnerabilities_by_k8s_container: true
      vulnerabilities_by_ecs_container: true

  reports:
    # GraphiQL is a GUI for editing and testing GraphQL queries and mutations.
    # Set enable_graphiql to true and open http://<host>:<port>/v2/reports/graphql in a browser for reports API
    enable_graphiql: true
    
    # This is the number of execution threads which will be used during report generation.
    max_async_execution_threads: 1
    
    # Configure async_execution_timeout to adjust how long a scheduled query must be running for before it is considered timed out
    # This may need to be adjusted if the system has large amounts of data and reports are being prematurely timed out.
    # The value should be a number followed by "w", "d", or "h" to represent weeks, days or hours
    async_execution_timeout: "48h"

    # Set use_volume to `true` to have the reports worker buffer report generation to disk instead of in memory. This should be configured
    # in production systems with large amounts of data (10s of thousands of images or more). Scratch volumes should be configured for the reports pods
    # when this option is enabled.
    use_volume: false

NOTE: Any changes to the configuration requires a restart of the service for the updates to take effect.

In an Anchore Enterprise deployment, any non-admin account user must have at least the listImages permission to execute queries against the Reports API. There is an RBAC role available called report-admin which provides permissions to administer reports and schedules. Please see Role-Based Access Control for more information.

Data ingress

The reports_worker service handles data ingress from Anchore Enterprise via the following asynchronous processes triggered periodically:

  • Loader: Compares the working set of images and tags in Anchore Enterprise with its own records. Based on the difference, images and tags along with the vulnerability report and policy evaluations are loaded into the service. Artifacts deleted from Anchore Enterprise are marked inactive in the service.

    This process is triggered periodically as described by the cycle timers listed above.

  • Refresher: Refreshes the vulnerability report and policy evaluations of all the images and tags actively maintained by the service.

    This process is triggered periodically as described by the cycle timers listed above.

WARNING: Reports service may miss updates to artifacts if they are added and deleted in between consecutive ingress processes.

Data ingress is enabled by default. It can be turned off with enable_data_ingress: false in the config.yaml snippet shown previously. In a quickstart deployment, add ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_INGRESS=false to the environment variables section of the reports service in docker-compose.yaml. When ingress is turned off, the Reports service will no longer aggregate data from Anchore Enterprise, and metric computations will also come to a halt. However, the service will continue to serve API requests and queries with the existing data.
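For instance, the corresponding quickstart docker-compose.yaml entry might look like the following; the reports service name is an assumption based on the quickstart layout:

services:
  reports:
    environment:
      # Disable periodic syncing of data from Anchore Enterprise into the reports service
      - ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_INGRESS=false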

Data egress

Data egress provides the ability to remove data which is no longer active in Anchore Enterprise from the stored report data. This process is disabled by default and is controlled by the enable_data_egress value. A companion setting, data_egress_window, determines how long such data is kept prior to its removal.

Metrics

Reports service comes loaded with a few pre-defined/canned metric definitions. A metric definition consists of an identifier, readable name, description and the type of the metric. The type is loosely based on statsd metric types. Currently, all the pre-defined metrics are of type ‘counter’ - a measure of the number of items that match certain criteria. A value for each of these metric definitions is computed using the data aggregated by the service.

All metric values are computed periodically every hour (3600 seconds). To modify the interval, update cycle_timers -> reports_metrics in the config.yaml snippet above. In a quickstart deployment, add ANCHORE_ENTERPRISE_REPORTS_METRICS_INTERVAL_SEC=<interval-in-seconds> to the environment variables section of the reports service in docker-compose.yaml.

See it in action

To see the Reports service in the Enterprise UI, see the Dashboard or the Reporting & Remediation view. The dashboard view uses metrics generated by the service and renders customizable widgets. The reports view employs GraphQL queries and aggregates the results into multiple formats (CSV, JSON, and so on).

For using the API directly, see API Access.

4.17 - Runtime Inventory

Overview

Using Anchore’s runtime inventory agents gives Anchore Enterprise visibility into which images are being used in your deployments. This can help give insight into where vulnerabilities or policy violations exist in your production workloads.

Agents

Anchore provides agents for collecting the inventory of different container runtime environments:

General Runtime Configuration

Inventory Time-To-Live

As part of reporting on your runtime environment, Anchore maintains an active record of the containers, the images they run, and other related metadata, based on the time they were last reported by an inventory agent.

The configuration settings below allow you to specify how long inventory should remain part of the Catalog Service’s working set. These are the default settings found in the values file.

services:
  catalog:
    runtime_inventory:
      inventory_ingest_overwrite: false
      inventory_ttl_days: 120

Below are a few examples on how you may want to use this feature.

Keep most recently reported inventory

inventory_ingest_overwrite: true
inventory_ttl_days: 7

For each cluster/namespace reported by the inventory agent, the system will delete any previously reported containers and images and replace them with the new inventory.

Note: The inventory_ttl_days setting is still needed to remove any clusters/namespaces that are no longer reported, as well as some of the supporting metadata (i.e. pods and nodes). This value should be configured to be long enough that inventory isn’t incorrectly removed in case of an outage of the reporting agent. The exact value depends on each deployment, but 7 days is a reasonable value here.

Keep inventory reported over a time period

inventory_ingest_overwrite: false
inventory_ttl_days: 14

This will delete any container and image that has not been reported by an agent in the last 14 days, including its supporting metadata (i.e. pods and nodes).

Keep inventory indefinitely

inventory_ingest_overwrite: false
inventory_ttl_days: 0

This will keep any containers, images, and supporting metadata reported by an inventory agent indefinitely.

Deleting Inventory via API

Where it is not desirable to wait for the inventory TTL to remove runtime inventory images, it is possible to manually delete inventory items via the API by issuing a DELETE to /v2/inventories with the following query parameters.

  • inventory_type (required) - either ecs or kubernetes
  • context (required) - it must match a context as seen by the output of anchorectl inventory list
    • Kubernetes - this is a combination of cluster name (as defined by the anchore-k8s-inventory config) and a namespace containing running containers e.g. cluster1/default.
    • ECS - this is the cluster ARN e.g. arn:aws:ecs:eu-west-2:123456789012:cluster/myclustername
  • image_digest (optional) - set if you only want to remove a specific image

e.g. DELETE /v2/inventories?inventory_type=<string>&context=<string>&image_digest=<string>

Using curl: curl -X DELETE -u username:password "http://{servername:port}/v2/inventories?inventory_type=<string>&context=<string>&image_digest=<string>"
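A concrete example for a Kubernetes context; the credentials, host, and context values here are illustrative:

# Remove all reported inventory for the default namespace of cluster1
curl -X DELETE -u admin:foobar "http://localhost:8228/v2/inventories?inventory_type=kubernetes&context=cluster1/default"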

4.18 - Storage Configuration

Storage During Analysis

Scratch Space

Anchore uses a local directory for image analysis operations including downloading layers and unpacking the image content for the analysis process. This space is necessary on each analyzer worker service and should not be shared. The scratch space is ephemeral and can have its lifecycle bound to that of the service container.

Layer Cache

The layer cache is an extension of the analyzer’s scratch space that is used to cache layer downloads to reduce analysis time and network usage during the analysis process itself. For more information, see Layer Caching.

Storing Analysis Results

Anchore Enterprise is a data intensive system and uses external storage systems for all data persistence. None of the services are stateful in themselves.

For structured data that must be quickly queried and indexed, Anchore relies on PostgreSQL as its primary data store. Any database that is compatible with PostgreSQL 13 or higher should work, such as Amazon Aurora and Google Cloud SQL.

For more information, see Database.

For less structured data, Anchore implements an internal object store that can be overlaid on different backend providers. It defaults to using the main PostgreSQL database to reduce the out-of-the-box dependencies, but S3 is supported for leveraging external systems.

For more information on configuration and requirements for the core database and object stores, see Object Storage.

Analysis Archive

To aid in capacity management, Anchore provides a separate storage location where completed image analysis can be moved to. This reduces consumption of database capacity and primary object storage. It also removes the analysis from most API actions but makes it available to restore into the primary storage systems as needed. The analysis archive is configured as an alternate object store. For more information, see: Configuring Analysis Archive.

4.18.1 - Analysis Archive Storage Configuration

For information on what the analysis archive is and how it works, see Concepts: Analysis Archive

The Analysis Archive is an object store with specific semantics and thus is configured as an object store using the same configuration options, just with a different config key: analysis_archive

Example configuration snippet for using the db for working set object store and S3 for the analysis archive:

...
services:
  ...
  catalog:
  ...
  object_store:
    compression:
      enabled: false
      min_size_kbytes: 100
    storage_driver:
      name: db
      config: {}      
  analysis_archive:
      compression:
        enabled: False
        min_size_kbytes: 100
      storage_driver:
        name: 's3'
        config:
          access_key: 'MY_ACCESS_KEY'
          secret_key: 'MY_SECRET_KEY'
          #iamauto: True
          url: 'https://S3-end-point.example.com'
          region: False
          bucket: 'anchorearchive'
          create_bucket: True

Default Configuration

By default, if no analysis_archive config is found or the property is not present in the config.yaml, the analysis archive will use the object_store or archive (for backwards compatibility) config sections and those defaults (e.g. db if found).

Anchore stores all of the analysis archive objects in an internal logical bucket, analysis_archive, that is distinct within the configured backends (e.g. a key prefix in the S3 bucket).

Changing Configuration

If there are no image analyses in the archive, there is no data to move and you can simply update the configuration to use a different backend. Once an image analysis has been archived, however, you must follow the object storage data migration process found here to change the configuration. As noted in that guide, if you need to migrate to or from an analysis_archive config, you’ll need to use the --from-analysis-archive/--to-analysis-archive options as needed to tell the migration process which configuration to use in the source and destination config files used for the migration.
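As a sketch, migrating the analysis archive data into the destination’s analysis_archive configuration might look like the following; treating the flags as boolean switches is an assumption here, so consult the migration guide for the authoritative syntax:

anchore-manager objectstorage --db-connect ${db} migrate --to-analysis-archive /config/config.yaml /config/dest-config.yaml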

Common Configurations

  1. Single shared object store backend: omit the analysis_archive config, or set it to null or {}

  2. Different bucket/container: the object_store and analysis_archive configurations are both specified and identical with the exception of the bucket or container values for the analysis_archive so that its data is split into a different backend bucket to allow for lifecycle controls or cost optimization since its access is much less frequent (if ever).

  3. Primary object store in DB, analysis_archive in external S3: this keeps latency low as no external service is needed for the object store and active data but lets you use more scalable external object storage for archive data. This approach is most beneficial if you can keep the working set of images small and quickly transition old analysis to the archive to ensure the db is kept small and the analysis archive handles the data scaling over time.

4.18.2 - Database Storage

Anchore stores all metadata in a structured format in a PostgreSQL database to support API operations and searches.

Examples of data persisted in the database:

  • Image metadata (distro, version, layer counts, …)
  • Image digests to tag mapping (docker.io/nginx:latest is hash sha256:abcd at time t)
  • Image analysis content indexed for policy evaluation (files, packages, ..)
  • Feed data
    • vulnerability info
    • package info from upstream (gem/npm)
  • Accounts, users…

If the object store is not explicitly set to an external provider, then that data is also persisted in the database, but it can be migrated to an external provider later.

Reducing Database Storage Usage

Beyond enabling a non-DB object store there are some configuration options to reduce database storage and IO used by Anchore.

Configuration of Indexed DB Storage for Package DB File Entries

There is a configuration option for the policy engine service that disables the use of the database for storing indexed package database entries from each analyzed image. This data represents the files in each distro package and their metadata (digests and permissions) from each scanned image, stored in the image_package_db_entries table. That table is only used by the policy engine to deliver the packages.verify policy trigger, so if you do not use that trigger, this storage can be disabled, thereby reducing database load and resource usage. The data can be quite large, often thousands of rows per analyzed image, so for customers that do not use this data for policy, disabling the loading of this data can reduce database consumption significantly.

Disabling Indexed DB Storage for Package DB File Entries

In each policy engine’s config.yaml file, change:

enable_package_db_load: true

to

enable_package_db_load: false

You can configure this by adding an environment variable, ANCHORE_POLICY_ENGINE_ENABLE_PACKAGE_DB_LOAD, with your chosen value on the policy engine service for both Compose and Helm deployments. It is enabled (set to ’true’) by default.
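For example, in a quickstart docker-compose.yaml; the policy-engine service name is an assumption based on the quickstart layout:

services:
  policy-engine:
    environment:
      # Disable storage of indexed package DB file entries (disables the packages.verify trigger)
      - ANCHORE_POLICY_ENGINE_ENABLE_PACKAGE_DB_LOAD=false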

Note that disabling the table usage will also disable support for the packages.verify trigger, and any policies that have the trigger in a rule will be considered invalid and return errors on evaluation. Any new policies that attempt to use the trigger will be rejected on upload as invalid.

Once this configuration is set, you may delete data in that table to reclaim some database storage capacity. If you’re interested in this option, please contact support for guidance on the process.

Enabling Indexed DB Storage for Package DB File Entries

If you find that you do need the trigger, you can change the configuration back to ’true’ and support will be restored. However, any images analyzed while the setting was ‘false’ will need to be re-analyzed in order to populate their data in that table correctly.

4.18.3 - Layer Caching

Once an image is submitted to Anchore Enterprise for analysis, the system will attempt to retrieve metadata about the image from the Docker registry and, if successful, will download the image and queue it for analysis.

Anchore Enterprise can run one or more analyzer services to scale out processing of images. The next available analyzer worker will process the image.

Docker Images are made up of one or more layers, which are described in the manifest. The manifest lists the layers which are typically stored as gzipped compressed TAR files.

As part of image analysis Anchore Enterprise will:

  • Download all layers that comprise an image
  • Extract the layers to a temporary file system location
  • Perform analysis on the contents of the image including:
    • Digest of every file (SHA1, SHA256, and MD5)
    • File attributes (size, owner, permissions, etc.)
    • Operating System package manifest
    • Software library package manifest (NPM, GEM, Java, Python, NuGet)
    • Scan for secret materials (API keys, private keys, etc.)

Following the analysis the extracted layers and downloaded layer tar files are deleted.

In many cases images will share a number of common layers, especially if they are built from a consistent set of base images. To speed up analysis, Anchore Enterprise can be configured to cache image layers, eliminating the need to download the same layer for many different images. The layer cache is disabled in the default Anchore Enterprise configuration. To enable the cache, the following changes should be made:

  1. Define temporary directory for cache data

It is recommended that the cache data is stored in an external volume to ensure that the cache does not use up the ephemeral storage space allocated to the container host.

By default Anchore Enterprise uses the /tmp directory within the container to download and extract images. Configure a volume to be mounted into the container at a specified path and configure this path in config.yaml

tmp_dir: '/scratch'

In this example a volume has been mounted as /scratch within the container and config.yaml updated to use /scratch as the temporary directory for image analysis.
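A minimal Docker Compose sketch of that layout; the analyzer service name and host path below are illustrative:

services:
  analyzer:
    volumes:
      # Host-side scratch directory mounted at the tmp_dir path configured above
      - /path/to/aevolume/scratch:/scratch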

With the cache disabled the temporary directory should be sized to at least 3 times the uncompressed image size to be analyzed. To enable layer caching the layer_cache_enable parameter and layer_cache_max_gigabytes parameter should be added to the analyzer section of the Anchore Enterprise configuration file config.yaml.

analyzer:
    enabled: True
    require_auth: True
    cycle_timer_seconds: 1
    analyzer_driver: 'nodocker'
    endpoint_hostname: '${ANCHORE_HOST_ID}'
    listen: '0.0.0.0'
    port: 8084
    layer_cache_enable: True
    layer_cache_max_gigabytes: 4

In this example the cache is set to 4 gigabytes. The temporary volume should be sized to at least 3 times the uncompressed image size + 4 gigabytes.
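For example, to analyze images up to 2 GB uncompressed with this 4 GB cache, the temporary volume should be sized to at least (3 × 2 GB) + 4 GB = 10 GB.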

  • The minimum size for the cache is 1 gigabyte.
  • The cache uses a least recently used (LRU) policy.
  • The cache files will be stored in the anchore_layercache directory within the configured tmp_dir volume.

4.18.4 - Object Storage

Anchore Enterprise uses a PostgreSQL database to store structured data for images, tags, policies, subscriptions, and metadata about images, but other types of data in the system are less structured and tend to be larger pieces of data. Because of that, there are benefits to supporting key-value access patterns for things like image manifests, analysis reports, and policy evaluations. For such data, Anchore has an internal object storage interface that, while defaulting to the same PostgreSQL database for storage, can be configured to use external object storage providers to support simpler capacity management and lower costs. The options are:

  • PostgreSQL database (default)
  • S3 Object Store

The configuration for the object store is set in the catalog’s service configuration in the config.yaml.

4.18.4.1 - Migrating Data to New Drivers

Overview

To cleanly migrate data from one archive driver to another, Anchore Enterprise includes some tooling that automates the process in the ‘anchore-manager’ tool packaged with the system.

The migration process is an offline process; Anchore Enterprise is not designed to handle an online migration.

For the migration process you will need:

  1. The original config.yaml already used by the services. If services are split out or use different config.yaml files for different services, you need the config.yaml used by the catalog service.
  2. An updated config.yaml (named dest-config.yaml in this example), with the archive driver section of the catalog service config set to the config you want to migrate to.
  3. The db connection string from config.yaml; this is needed directly by the anchore-manager script.
  4. Credentials and resources (bucket, etc.) for the destination of the migration.

At a high-level the process is:

  1. Shut down all Anchore Enterprise services and components. The system should be fully offline, but the database must be online and available. For a Docker Compose install, this is achieved by simply stopping the engine container, but not deleting it.
  2. Prepare a new config.yaml that includes the new driver configuration for the destination of the migration (dest-config.yaml) in the same location as the existing config.yaml
  3. Test the new dest-config.yaml to ensure correct configuration
  4. Run the migration
  5. Get coffee… this could take a while if you have a lot of analysis data
  6. When complete, view the results
  7. Ensure the dest-config.yaml is in place for all the components as config.yaml
  8. Start anchore-engine

Migration Example Using Docker Compose Deployed Anchore Engine

The following is an example migration for an anchore-engine deployed via docker compose on a single host with a local PostgreSQL container, which is basically the example used in the ‘Installing Anchore Engine’ documents. At the end of this section, we’ll cover the caveats and things to watch for in a multi-node install of anchore engine.

This process requires that you run the command in a location that has access to both the source archive driver configuration and the new archive driver configuration.

Step 1: Shutdown all services

All services should be stopped, but the postgresql db must still be available and running.

docker compose stop anchore-engine

Step 2: Prepare a new config.yaml

Both the original and new configurations are needed, so create a copy and update the archive driver section to the configuration you want to migrate to.

cd config
cp config.yaml dest-config.yaml
<edit dest-config.yaml>

Step 3: Test the destination config

Assuming that config is dest-config.yaml:

[user@host aevolume]$ docker compose run anchore-engine /bin/bash
[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} check /config/dest-config.yaml 
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Using config file /config/dest-config.yaml
[MainThread] [anchore_engine.subsys.object_store.operations/initialize()] [INFO] Archive initialization complete
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Checking existence of test document with user_id = test, bucket = anchorecliconfigtest and archive_id = cliconfigtest
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Creating test document with user_id = test, bucket = anchorecliconfigtest and archive_id = cliconfigtest
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Checking document fetch
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Removing test object
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Archive config check completed successfully

Step 3a: Test the current config.yaml

Perform this step if you are running the migration from a different location than one of the anchore engine containers.

Same as above but using /config/config.yaml as the input to check (skipped in this instance since we’re running the migration from the same container)

Step 4: Run the Migration

By default, the migration process will remove data from the source once it has confirmed it has been copied to the destination and the metadata has been updated in the anchore db. To skip the deletion on the source, use the ‘--nodelete’ option. It is the safest option, but if you use it, you are responsible for removing the data later.

[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} migrate /config/config.yaml /config/dest-config.yaml 
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Loading configs
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration from config: {
  "storage_driver": {
    "config": {}, 
    "name": "db"
  }, 
  "compression": {
    "enabled": false, 
    "min_size_kbytes": 100
  }
}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration to config: {
  "storage_driver": {
    "config": {
      "access_key": "9EB92C7W61YPFQ6QLDOU", 
      "create_bucket": true, 
      "url": "http://minio-ephemeral-test:9000/", 
      "region": false, 
      "bucket": "anchore-engine-testing", 
      "prefix": "internaltest", 
      "secret_key": "TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s"
    }, 
    "name": "s3"
  }, 
  "compression": {
    "enabled": true, 
    "min_size_kbytes": 100
  }
}
Performing this operation requires *all* anchore-engine services to be stopped - proceed? (y/N)y
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Initializing migration from {'storage_driver': {'config': {}, 'name': 'db'}, 'compression': {'enabled': False, 'min_size_kbytes': 100}} to {'storage_driver': {'config': {'access_key': '9EB92C7W61YPFQ6QLDOU', 'create_bucket': True, 'url': 'http://minio-ephemeral-test:9000/', 'region': False, 'bucket': 'anchore-engine-testing', 'prefix': 'internaltest', 'secret_key': 'TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s'}, 'name': 's3'}, 'compression': {'enabled': True, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/migration_context()] [INFO] Initializing source object_store: {'storage_driver': {'config': {}, 'name': 'db'}, 'compression': {'enabled': False, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/migration_context()] [INFO] Initializing dest object_store: {'storage_driver': {'config': {'access_key': '9EB92C7W61YPFQ6QLDOU', 'create_bucket': True, 'url': 'http://minio-ephemeral-test:9000/', 'region': False, 'bucket': 'anchore-engine-testing', 'prefix': 'internaltest', 'secret_key': 'TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s'}, 'name': 's3'}, 'compression': {'enabled': True, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration Task Id: 1
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Entering main migration loop
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migrating 7 documents
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/policy_bundles/2c53a13c-1765-11e8-82ef-23527761d060
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/manifest_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/analysis_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/image_content_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/manifest_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/analysis_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/image_content_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration result summary: {"last_state": "running", "executor_id": "3209ad44d7bb:37:139731996518208:", "archive_documents_migrated": 7, "last_updated": "2018-08-15T18:03:52.951364", "online_migration": null, "created_at": "2018-08-15T18:03:52.951354", "migrate_from_driver": "db", "archive_documents_to_migrate": 7, "state": "complete", "migrate_to_driver": "s3", "ended_at": "2018-08-15T18:03:53.720554", "started_at": "2018-08-15T18:03:52.949956", "type": "archivemigrationtask", "id": 1}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] After this migration, your anchore-engine config.yaml MUST have the following configuration options added before starting up again:
compression:
  enabled: true
  min_size_kbytes: 100
storage_driver:
  config:
    access_key: 9EB92C7W61YPFQ6QLDOU
    bucket: anchore-engine-testing
    create_bucket: true
    prefix: internaltest
    region: false
    secret_key: TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s
    url: http://minio-ephemeral-test:9000/
  name: s3

Note: If something goes wrong you can reverse the parameters of the migrate command to migrate back to the original configuration (e.g. … migrate /config/dest-config.yaml /config/config.yaml)

Step 5: Get coffee!

The migration time will depend on the amount of data and the source and destination systems performance.

Step 6: View results summary

[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} list-migrations
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
id         state                  start time                         end time                 from        to        migrated count        total to migrate               last updated               
1         complete        2018-08-15T18:03:52.949956        2018-08-15T18:03:53.720554         db         s3              7                      7                2018-08-15T18:03:53.724628   

This lists all migrations for the service and the number of objects migrated. If you’ve run multiple migrations you’ll see multiple rows in this response.

Step 7: Replace old config.yaml with updated dest-config.yaml

[root@3209ad44d7bb ~]# cp /config/config.yaml /config/config.old.yaml
[root@3209ad44d7bb ~]# cp /config/dest-config.yaml /config/config.yaml

Step 8: Restart anchore-engine services

[user@host aevolume]$ docker compose start anchore-engine

The system should now be up and running using the new configuration! You can verify with the anchorectl command by fetching a policy, which will have been migrated:

[root@d8d3f49d9328 /]# anchorectl policy list
 ✔ Fetched policies
┌─────────────────────────┬──────────────────────────────────────┬────────┬──────────────────────┐
│ NAME                    │ POLICY ID                            │ ACTIVE │ UPDATED              │
├─────────────────────────┼──────────────────────────────────────┼────────┼──────────────────────┤
│ Default bundle          │ 2c53a13c-1765-11e8-82ef-23527761d060 │ true   │ 2022-07-14T22:52:27Z │
│ anchore_security_only   │ anchore_security_only                │ false  │ 2022-07-14T22:52:27Z │
│ anchore_cis_1.13.0_base │ anchore_cis_1.13.0_base              │ false  │ 2022-07-14T22:52:27Z │
└─────────────────────────┴──────────────────────────────────────┴────────┴──────────────────────┘

[root@d8d3f49d9328 /]# anchorectl -o json-raw policy get 2c53a13c-1765-11e8-82ef-23527761d060 
[ 
  {
    "blacklisted_images": [], 
    "comment": "Default bundle", 
    "id": "2c53a13c-1765-11e8-82ef-23527761d060", 
... <lots of json>

If that returns the content properly, then you’re all done!

Things to Watch for in a Multi-Node Anchore Engine Installation

  • Before migration: Ensure all services are down before starting migration
  • At migration: Ensure the place you’re running the migration from has the same db access and network access to the archive locations
  • After migration: Ensure that all components get the updated config.yaml. Strictly speaking, only containers that run the catalog service need the updated configuration, but it’s best to ensure that any config.yaml in the system which has a services.catalog definition also has the proper and up-to-date configuration to avoid confusion or accidental reverting of the config.

Example Process with docker compose

# ls docker-compose.yaml 
docker-compose.yaml

# docker compose ps
              Name                            Command               State                       Ports                     
--------------------------------------------------------------------------------------------------------------------------
aevolumepy3_anchore-db_1           docker-entrypoint.sh postgres    Up      5432/tcp                                      
aevolumepy3_anchore-engine_1       /bin/sh -c anchore-engine        Up      0.0.0.0:8228->8228/tcp, 0.0.0.0:8338->8338/tcp
aevolumepy3_anchore-minio_1        /usr/bin/docker-entrypoint ...   Up      0.0.0.0:9000->9000/tcp                        
aevolumepy3_anchore-prometheus_1   /bin/prometheus --config.f ...   Up      0.0.0.0:9090->9090/tcp                        
aevolumepy3_anchore-redis_1        docker-entrypoint.sh redis ...   Up      6379/tcp                                      
aevolumepy3_anchore-ui_1           /bin/sh -c node /home/node ...   Up      0.0.0.0:3000->3000/tcp                        

# docker compose stop anchore-engine
Stopping aevolume_anchore-engine_1 ... done

# docker compose run anchore-engine anchore-manager objectstorage --db-connect postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres check /config/config.yaml.new
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_connect": "postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres", "db_connect_args": {"timeout": 30, "ssl": false}, "db_pool_size": 30, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Using config file /config/config.yaml.new
...
...

# docker compose run anchore-engine anchore-manager objectstorage --db-connect postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres migrate /config/config.yaml /config/config.yaml.new
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_connect": "postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres", "db_connect_args": {"timeout": 30, "ssl": false}, "db_pool_size": 30, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Loading configs
[MainThread] [anchore_engine.configuration.localconfig/validate_config()] [WARN] no webhooks defined in configuration file - notifications will be disabled
[MainThread] [anchore_engine.configuration.localconfig/validate_config()] [WARN] no webhooks defined in configuration file - notifications will be disabled
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration from config: {
  "compression": {
    "enabled": false,
    "min_size_kbytes": 100
  },
  "storage_driver": {
    "name": "db",
    "config": {}
  }
}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration to config: {
  "compression": {
    "enabled": true,
    "min_size_kbytes": 100
  },
  "storage_driver": {
    "name": "s3",
    "config": {
      "access_key": "Z54LPSMFKXSP2E2L4TGX",
      "secret_key": "EMaLAWLVhUmV/f6hnEqjJo5+/WeZ7ukyHaBKlscB",
      "url": "http://anchore-minio:9000",
      "region": false,
      "bucket": "anchorearchive",
      "create_bucket": true
    }
  }
}
Performing this operation requires *all* anchore-engine services to be stopped - proceed? (y/N) y
...
...
...
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration result summary: {"last_updated": "2018-08-14T22:19:39.985250", "started_at": "2018-08-14T22:19:39.984603", "last_state": "running", "online_migration": null, "archive_documents_migrated": 500, "migrate_to_driver": "s3", "id": 9, "executor_id": "e9fc8f77714d:1:140375539468096:", "ended_at": "2018-08-14T22:20:03.957291", "created_at": "2018-08-14T22:19:39.985246", "state": "complete", "archive_documents_to_migrate": 500, "migrate_from_driver": "db", "type": "archivemigrationtask"}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] After this migration, your anchore-engine config.yaml MUST have the following configuration options added before starting up again:
...
...

# cp config/config.yaml config/config.yaml.original

# cp config/config.yaml.new config/config.yaml

# docker compose start anchore-engine
Starting anchore-engine ... done

Migrating Analysis Archive Data

The object storage migration process migrates any data stored in the source configuration to the destination configuration. If the analysis archive is configured to use the same storage backend as the primary object store, that data is migrated along with all other data. If, however, the source or destination configurations define different storage backends for the analysis archive than for the primary object store, then additional parameters are necessary in the migration commands to indicate which configurations to migrate to and from.

The most common migration patterns are:

  1. Migrate from a single backend configuration to a split configuration to move analysis archive data to an external system (db -> db + s3)

  2. Migrate from a dual-backend configuration to a single-backend configuration with a different config (e.g. db + s3 -> s3)

Migrating a single backend to a split backend

For example, moving from the unified db backend (the default configuration) to a db + s3 configuration that uses s3 for the analysis archive.

source-config.yaml snippet:

...
services:
...
  catalog:
    ...
    object_store:
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}        
...

dest-config.yaml snippet:

...
services:
...
  catalog:
    ...
    object_store:
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
    analysis_archive:
      enabled: true
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config: 
          access_key: 9EB92C7W61YPFQ6QLDOU
          secret_key: TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s
          url: 'http://minio-ephemeral-test:9000'
          region: null
          bucket: analysisarchive
      ...   

Anchore stores its internal data in logical ‘buckets’ that are overlaid onto the storage backend in a driver-specific way. To migrate specific internal buckets (effectively, classes of data), use the --bucket option in the manager CLI. This should generally not be necessary, but it may be needed for specific kinds of migrations.

The following command will execute the migration (supply your database connection string to --db-connect, as in the earlier examples). Note that the --bucket option names an internal Anchore logical bucket, not an actual bucket in S3:

anchore-manager objectstorage --db-connect <db-connect-string> migrate --to-analysis-archive --bucket analysis_archive source-config.yaml dest-config.yaml
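
For example, reusing the database connection string from the docker compose walkthrough above (your credentials, host, and config file paths will differ):

anchore-manager objectstorage --db-connect postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres migrate --to-analysis-archive --bucket analysis_archive source-config.yaml dest-config.yaml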

Migrating from dual object storage backends to a single backend

For example, migrating from a db + s3 backend to a single s3 backend in a different bucket:

Example source-config.yaml snippet:

...
services:
...
  catalog:
    ...
    object_store:
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
    analysis_archive:
      enabled: true
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config: 
          access_key: 9EB92C7W61YPFQ6QLDOU
          secret_key: TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s
          url: 'http://minio-ephemeral-test:9000'
          region: null
          bucket: analysisarchive        
...

The destination config uses a single backend. Note that the S3 bucket has also changed, so all data must be migrated.

Example dest-config.yaml snippet:

...
services:
...
  catalog:
    ...
    object_store:
      enabled: true
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config: 
          access_key: 9EB92C7W61YPFQ6QLDOU
          secret_key: TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s
          url: 'http://minio-ephemeral-test:9000'
          region: null
          bucket: newanchorebucket
      ...

First, migrate the object data in the db on the source:

anchore-manager objectstorage --db-connect <db-connect-string> migrate source-config.yaml dest-config.yaml

Next, migrate the object data in the analysis archive from the old config (s3 bucket ‘analysisarchive’) to the new config (s3 bucket ’newanchorebucket’):

anchore-manager objectstorage --db-connect <db-connect-string> migrate --from-analysis-archive source-config.yaml dest-config.yaml
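
Putting both steps together with a concrete connection string (reusing the docker compose value from the walkthrough above; substitute your own):

# Step 1: migrate the primary object store
anchore-manager objectstorage --db-connect postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres migrate source-config.yaml dest-config.yaml

# Step 2: migrate the analysis archive into the new single backend
anchore-manager objectstorage --db-connect postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres migrate --from-analysis-archive source-config.yaml dest-config.yaml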

4.18.4.2 - Database Driver

The default object store driver is the PostgreSQL database driver which stores all object store documents within the PostgreSQL database.

A component of the object store driver is the archive_document. When the default object store driver is used, as opposed to a user-configured S3 bucket, this is the location where image SBOMs, vulnerability scans, policy evaluations, and reports are stored.

Compression is not supported for this driver since the underlying database will handle compression.

There are no configuration options required for the Database driver.

The embedded configuration for Anchore Enterprise includes the default configuration for the db driver.

object_store:
  compression:
    enabled: False
    min_size_kbytes: 100
  storage_driver:
    name: db
    config: {}

4.18.4.3 - S3 Object Store Driver

Using the S3 driver, data can be stored using Amazon’s S3 storage or any Amazon S3 API compatible system.

object_store:
  compression:
    enabled: False
    min_size_kbytes: 100
  storage_driver:
    name: 's3'
    config:
      access_key: 'MY_ACCESS_KEY'
      secret_key: 'MY_SECRET_KEY'
      #iamauto: True
      url: 'https://S3-end-point.example.com'
      region: False
      bucket: 'anchorearchive'
      create_bucket: True

Example for AWS S3 in us-west-2:

object_store:
  compression:
    enabled: True
    min_size_kbytes: 100
  storage_driver:
    name: 's3'
    config:
#      access_key: 'MY_ACCESS_KEY'
#      secret_key: 'MY_SECRET_KEY'
      iamauto: True
      #url: 'https://S3-end-point.example.com'
      region: us-west-2
      bucket: anchoredata
      create_bucket: False

Example for Minio running in a Docker Compose setup on the same host network as Anchore (container named ‘minio’):

object_store:
  compression:
    enabled: True
    min_size_kbytes: 100
  storage_driver:
    name: 's3'
    config:
      access_key: 'MY_ACCESS_KEY_FOR_MINIO'
      secret_key: 'MY_SECRET_KEY_FOR_MINIO'
      #iamauto: True
      url: 'https://minio:5000'
      #region: us-west-2
      bucket: anchoredata
      create_bucket: False

Compression

The S3 driver supports compression of documents. The documents are JSON formatted and see a significant reduction in size through compression, but there is overhead incurred by running compression and decompression on every access of these documents. Anchore Enterprise can be configured to compress only documents above a certain size to reduce unnecessary overhead. In the example below, any document over 100KB in size will be compressed.
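
A minimal sketch of such a configuration (values mirror the AWS example above):

object_store:
  compression:
    enabled: True
    min_size_kbytes: 100
  storage_driver:
    name: 's3'
    config:
      iamauto: True
      region: us-west-2
      bucket: anchoredata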

Authentication

Anchore Enterprise can authenticate against the S3 service using one of two methods:

  • Amazon Access Keys - Using this method, provide an Access Key and Secret Access Key that have access to read and write to the bucket. Parameters: access_key and secret_key

  • Inherit IAM Role - Anchore Enterprise can be configured to inherit the IAM role from the EC2 or ECS instance it runs on, or from a Kubernetes service account. When launching the EC2 instance that will run Anchore Enterprise, you need to specify a role that includes the ability to read and write from the archive bucket (see the example policy sketch after this list). To use IAM roles to authenticate, replace the access_key and secret_key configurations with iamauto: True. Parameters: iamauto
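
As a sketch only (the bucket name anchorearchive and the exact action list are assumptions; tailor both to your deployment), an IAM policy granting the required bucket access might look like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::anchorearchive"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::anchorearchive/*"
    }
  ]
}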

S3 Endpoint and Bucket

  • url: (required if region not set) The URL at which to reach an S3-API compatible service, if you are not using actual Amazon S3. If the url is configured, the region config value is ignored.
  • region: (required if url not set) The AWS region that hosts the bucket. If you are not using actual S3, this is probably not necessary unless your S3-compatible service requires it. If url is configured, this field is ignored.
  • bucket: (required) The name of the S3 bucket that Anchore will use for storing data.
  • create_bucket: (default: false) Try to create the bucket if it doesn’t already exist. This should be used very sparingly. For most cases, you should pre-create the bucket so that it has the permissions you desire, then set this to false.

Storing Object Store API key in a Kubernetes Secret

You can configure your object store API key to be pulled from a Kubernetes secret as follows:

extraEnv:
  - name: ANCHORE_OBJ_STORAGE_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: minio-secret
        key: accessKey
  - name: ANCHORE_OBJ_STORAGE_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: minio-secret
        key: secretKey
anchoreConfig:
  catalog:
    object_store:
      storage_driver:
        name: s3
        config:
          access_key: ${ANCHORE_OBJ_STORAGE_ACCESS_KEY}
          secret_key: ${ANCHORE_OBJ_STORAGE_SECRET_KEY}

In this example the secret was called minio-secret but you can use whatever name you would like. The secret looks as follows:

apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
data:
  # values are base64-encoded
  accessKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  secretKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
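
One way to create such a secret is with kubectl (a sketch; the key names must match the secretKeyRef entries above, and kubectl base64-encodes the literal values for you):

kubectl create secret generic minio-secret \
  --from-literal=accessKey=MY_ACCESS_KEY \
  --from-literal=secretKey=MY_SECRET_KEY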

4.19 - Working with Subscriptions

Introduction

Anchore Enterprise supports 7 types of subscriptions:

  • Tag Update
  • Policy Update
  • Vulnerability Update
  • Analysis Update
  • Alerts
  • Repository Update
  • Runtime Inventory

Enabling some of these will generate a notification when the event is triggered while others may have a more significant impact on the system.

Tag Update

Granularity: Per Image Tag
Notification Generated: Yes
Background Process: Yes
Default Timer Frequency: Every 60 minutes
Default State: Disabled (unless the Tag is added by AnchoreCTL)
Other Considerations: Adds new tag/digest pairs to the system

When the tag_update subscription is enabled, a background process, called a “watcher”, will periodically query the repository for any new image digests with the same tag. For each new image digest found:

  • it will be pulled into the catalog and analyzed
  • a Tag Update Notification will be triggered.
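
For example, a tag update subscription can be activated per tag with AnchoreCTL (a sketch; verify the exact argument order with anchorectl subscription --help in your version):

anchorectl subscription activate tag_update docker.io/library/nginx:latest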

Policy Updates

Granularity: Per Image Tag
Notification Generated: Yes
Background Process: Yes
Default Timer Frequency: Every 60 minutes
Default State: Disabled
Other Considerations: None

This class of notification is triggered if a Tag to which a user has subscribed has a change in its policy evaluation status. The policy evaluation status of an image can be one of two states: Pass or Fail. If an image that was previously marked as Pass changes status to Fail or vice-versa then the policy update notification will be triggered.

The policy status of a Tag may be changed by a number of methods.

  • Change to policy
    • If a policy was changed, for example by adding, editing, or removing a policy check, then the policy status of an image may be affected. For example, adding a policy rule that denylists a specific package present in a given Tag may cause the Tag’s policy status to move to Fail.
  • Changes to Allowlist
    • If an allowlist is changed to add or remove a CVE, then this may cause a policy status change. For example, if an image contains a package that is vulnerable to Critical Severity CVE-2017-9999, then this image may fail its policy evaluation. If CVE-2017-9999 is added to a CVE Allowlist that is mapped to the subscribed Tag, then the policy status may change from Fail to Pass.
  • Change in Policy / Allowlist Mapping
    • If the policy mapping is changed, then a new policy or allowlist may be applied to an image, which may change the status of the image. For example, changing the mapping to add a more restrictive policy may change a Tag’s status from Pass to Fail.
  • Change in Package or Vulnerability Data
    • Some policy checks make use of data from external feeds. For example, vulnerability checks use CVE data feeds. Changes in data within these feeds may change the policy status, such as when a new CVE vulnerability is added.

Vulnerability / CVE Update

Granularity: Per Image Tag
Notification Generated: Yes
Background Process: Yes
Default Timer Frequency: Every 4 hours
Default State: Disabled
Other Considerations: None

This class of notification is triggered if the list of CVEs or other security vulnerabilities in the image changes.

For example, a user was subscribed to the library/nginx:latest tag. On the 12th of September 2017 a new vulnerability was added to the Debian 9 vulnerability feed which matched a package in the library/nginx:latest image, triggering a notification.

Based on the changes made by the upstream providers of CVE data (operating system vendors and NIST) CVEs may be added, removed or modified – for example a CVE initially marked as severity level Unknown may be upgraded to a higher severity level.

Note: A change to the CVE list in a Tag may not trigger a policy status change based on the policy rules configured for an image. In the example above the CVE had an unknown severity level which may not be tested by the policy mapped to this image.

Analysis Update

Granularity: Per Image Tag
Notification Generated: Yes
Background Process: No
Default Timer Frequency: n/a
Default State: Enabled
Other Considerations: None

This class of notification is triggered when an image has been analyzed. Typically, this is triggered when a new Tag has been added to the catalog. A common use case for this trigger is to alert an external system that a new Tag was added and has been successfully analyzed. Forcing a re-analysis on an existing image will also cause this notification to be generated.

Alerts

Granularity: Per Image Tag
Notification Generated: No
Background Process: Yes
Default Timer Frequency: Every 10 minutes
Default State: Disabled
Other Considerations: Enabling this subscription may be resource intensive, as frequent policy evaluations will occur

The UI and API use stateful alerts that are raised for policy violations on tags to which you are subscribed for alerts. This raises a clear notification in the UI to help initiate the remediation workflow and address the violations via the remediation feature. Once all findings are addressed, the alert is closed, providing an efficient workflow for users to bring their images into compliance with their policy.

Repository Update

Granularity: Per Repository
Notification Generated: No
Background Process: Yes
Default Timer Frequency: Every 60 seconds
Default State: Disabled
Other Considerations: Adds all the tags found in a repository to the system

This subscription, when enabled, will query the provided repository for any new tags. Any tag not already managed within Anchore will be added. This subscription also provides the ability to determine whether the tag_update subscription should be enabled for any new tag added to Anchore. Please see Repositories for more information.

Please Note: Enabling this subscription may add a large number of tags to the system.
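
For example, a repository can be added and watched with AnchoreCTL (a sketch; flag names may vary by version, see anchorectl repo add --help):

anchorectl repo add docker.io/library/alpine --auto-subscribe=false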

Runtime Inventory

Granularity: Per Runtime Inventory Context (Cluster/Namespace)
Notification Generated: No
Background Process: Yes
Default Timer Frequency: Every 2.5 minutes
Default State: Disabled
Other Considerations: Adds all the images found in the Context to the system

This subscription, when enabled, will find any newly reported images from the runtime inventory and add them to Anchore to be analyzed.

Please Note: Enabling this subscription may add a large number of images to the system.

4.20 - User Authentication

Overview

Anchore Enterprise offers authentication via HTTP basic auth, SAML/SSO, LDAP, and API keys.

For more information about specific types of Anchore authentication, see the following topics:

4.20.1 - API Keys

Overview

API (Application Programming Interface) keys are alphanumeric codes used to authenticate and control access to web-based services or APIs. These keys serve as unique identifiers for developers or applications seeking permission to interact with Anchore Enterprise. API keys are commonly employed in software development to manage and secure the flow of data between different applications, allowing authorized access while preventing unauthorized usage. They play a crucial role in ensuring the integrity, security, and controlled usage of APIs, acting as a form of digital credential for developers to connect their applications to external services.

Generating API Keys

A system user can generate an API key for their own use. Some users have specific RBAC roles (e.g., account-user-admin) that allow management of API keys for other system users. For more details on generating and managing API keys, please refer to this section: Generating API keys

Generating API keys as a SAML (SSO) user

API keys for SAML (SSO) users are disabled by default. To enable API keys for SAML users, please update your helm chart values file with the following:

    user_authentication: 
        allow_api_keys_for_saml_users: true

API keys are an additional authentication mechanism for SAML (SSO) users that bypasses the authentication control of the IDP. When access has been revoked at the IDP, it does not automatically disable the user or revoke all API keys for the user.

Using API Keys

API keys are authenticated using basic auth. To use an API key, supply the special username _api_key; the password is the value that was output when you created the API key.

e.g.

curl -u '_api_key:<API key value>' http://localhost:8228/v2/images

Caveats for API keys

API Keys generally inherit the permissions and roles of the user they were generated for, but there are certain operations you cannot perform using API keys regardless of which user they were generated for:

  • You cannot Add/Edit/Remove Users and Credentials.
  • You cannot Add/Edit/Revoke API Keys.

4.20.2 - User Credential Storage

Overview

All user information is stored in the Anchore DB. The credentials can be stored as plaintext or in a hashed form using the Argon2 hashing algorithm.

Hashed passwords are much more secure, but can be more computationally expensive to compare. Hashed passwords cannot be used for internal service communication since they cannot be reversed. Anchore provides a token-based authentication mechanism as well (a simplified Password-Grant flow of OAuth2) to mitigate the performance issue, but it requires that all Anchore services be deployed with a shared secret in the configuration, or a public/private keypair common to all services.

Passwords

The configuration of how passwords are stored is set in the user_authentication section of the config.yaml file and must be consistent across all components of an Anchore Enterprise deployment. Mismatch in this configuration between components of the system will result in the system not being able to communicate internally.

user_authentication:
  hashed_passwords: true|false

For all new deployments using the Enterprise Helm chart, hashed_passwords is defaulted to true.

All helm upgrades will carry forward the previous hashed_passwords setting.

NOTE: When the configuration is set to enable hashed_passwords, it must also be configured to use OAuth. When OAuth is not configured in the system, Anchore must be able to use HTTP Basic authentication between internal services and thus requires credentials that can be read.

Bearer Tokens/OAuth2

If Anchore is configured to support bearer tokens, the tokens are generated and returned to the user but never persisted in the database. Users must still have password credentials, however. Password persistence and protection configuration still applies as in the Password section above.

Configuring Hashed Passwords and OAuth

NOTE: Password storage configuration must be done at the time of deployment; it cannot be modified at runtime or after installation with an existing DB, since changing it will invalidate all existing credentials, including internal system credentials, and leave the system non-functional. You must choose the mechanism at system deployment time.

Set in config.yaml for all components of the deployment:

Option 1: Use a shared secret for signing/verifying oauth tokens

user_authentication:
  oauth:
    enabled: true
  hashed_passwords: true
keys:
  secret: mysecretvalue

Option 2: Use a public/private key pair, delivered as pem files on the filesystem of the containers anchore runs in:

user_authentication:
  oauth:
    enabled: true
  hashed_passwords: true
keys:
  private_key_path: <path to private key pem file>
  public_key_path: <path to public key pem file>
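
A suitable keypair can be generated with standard tooling, for example (a sketch; file names and key size are up to you):

openssl genrsa -out private_key.pem 2048
openssl rsa -in private_key.pem -pubout -out public_key.pem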

Using environment variables with the config.yaml bundled into the Anchore-provided anchore/enterprise image is also an option. NOTE: These are only valid when using the config.yaml provided in the image, because that file references them explicitly as replacement values.

ANCHORE_AUTH_SECRET = the string to use as a secret
ANCHORE_AUTH_PUBKEY = path to public key file
ANCHORE_AUTH_PRIVKEY = path to the private key file
ANCHORE_OAUTH_ENABLED = boolean to enable/disable oauth support
ANCHORE_OAUTH_TOKEN_EXPIRATION = the number of seconds a token should be valid (default is 3600 seconds)
ANCHORE_OAUTH_REFRESH_TOKEN_EXPIRATION = the number of seconds a refresh token is valid (default is 86400 seconds)
ANCHORE_AUTH_ENABLE_HASHED_PASSWORDS = boolean to enable/disable hashed password storage in the anchore db instead of clear text
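
For example, the shared-secret option could be expressed through the environment of the container running the Anchore-provided image (a docker compose style sketch; the secret value is a placeholder):

environment:
  - ANCHORE_OAUTH_ENABLED=true
  - ANCHORE_AUTH_ENABLE_HASHED_PASSWORDS=true
  - ANCHORE_AUTH_SECRET=mysecretvalue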

4.20.3 - LDAP

Overview

The Lightweight Directory Access Protocol (LDAP) is a standardized and widely used client-server protocol for accessing directory information, and can be enabled in the Anchore Enterprise Client to authenticate users against an existing directory server.

In order to configure Anchore Enterprise Client for use with LDAP, the requisite information for connecting and authenticating with an LDAP directory server must first be provided by an administrator. For the purposes of determining what users can see and do once they are logged in, the administrator must also create one or more account association entries, called user mappings.

When an LDAP user authenticates, the Anchore Enterprise account associated with their session is determined by the first user mapping containing a search filter that matches the information in their LDAP record. LDAP authentication will fail if no matches are found, if the associated account is disabled, or if the user’s login credentials are incorrect.

The following sections in this document describe how to configure the Anchore Enterprise Client for use with an LDAP directory server, how to add user mappings, and how to log in to the application as an LDAP user.

Server Connection Properties

Administrators can provide the information used to connect Anchore Enterprise Client to an LDAP server from the LDAP sidetab in the Configuration view. Please note that this sidetab is not visible to non-administrative users.

The connection property fields shown in this view are described below:

  • Server URI - The ldap:// or ldaps:// URI of the LDAP directory server to query.
  • Manager DN - The distinguished name (DN) of an LDAP directory manager that the Anchore Enterprise Client can use to perform further queries about LDAP users during login. The directory manager is typically a privileged server administrator who, once authenticated, can access the LDAP record of any user intended to access the application.
  • Manager Password - The password associated with the Manager DN.
  • Base DN - The relative distinguished name in the LDAP directory tree hierarchy under which queries about users should be performed.

After you have entered the required connection properties, click the Save button to store them. Once stored, you can click the Test button to verify that the application can authenticate with the LDAP server using the details you’ve provided.

Note: Clicking Save when no values are provided in any of the fields will disable LDAP in the application and prevent LDAP from being displayed as an authentication option on the login screen.

User Mappings

LDAP user mappings contain search filters that unite the results of searches made against the data attributes of LDAP records with account information stored in Anchore Enterprise.

When an LDAP user submits their credentials on the login page, the first match encountered will provide Anchore Enterprise Client with an associated Anchore Enterprise account that is used to define the scope of what the user can see and do once they are fully authenticated.

If a match is detected, the submitted password is then validated against the one stored inside the matched LDAP record. If the password is correct and the associated Anchore Enterprise account is not suspended, the user will be successfully logged in. If no match is found or the password is incorrect, authentication will fail.

Adding a User Mapping

User mappings can be created by administrators from inside an account within the Accounts sidetab in the Configuration view, or from the LDAP sidetab in the area below the server connection properties form.

To add a new user mapping containing an LDAP search filter, click the Add New LDAP User Mapping button—or if no user mappings are currently defined, click the Let’s add one! button in the empty table.

You will be presented a dialog, similar to the one shown below, where you can provide an LDAP search filter:

LDAP Search Filters

The LDAP search filter in each mapping provides the criteria for associating that mapping with an Anchore Enterprise account. For example:

uid=$USERNAME

In the above example, the user mapping requires that the uid (user ID) attribute in an LDAP record matches the data represented by the $USERNAME token.

The =$USERNAME string is a required entry, and the actual value of the token resolves to whatever value the user enters in the Username field when they log in to Anchore Enterprise Client.

In Microsoft® Active Directory® (AD) implementations that support the LDAP protocol, the sAMAccountName attribute is the broad equivalent of uid:

sAMAccountName=$USERNAME

Note: The submitted value of $USERNAME should always correspond to an attribute with a unique value within the LDAP user record, or one that is unique in combination with other criteria. In Active Directory, the uniqueness of sAMAccountName is enforced, whereas this may not be true for uid (which is an optional attribute in AD).

Additional filter criteria beyond the user identity can be provided to assert granular control over user access. The following examples describe filters with narrower scope:

(&(cn=$USERNAME)(|(ou:dn:=Administrative)(ou:dn:=Management)))
(&(ou=devops)(uniqueMember=uid=$USERNAME,dc=example,dc=org))

A detailed summary of the syntax and formula of LDAP search filters is beyond the scope of this document, however RFC 1558 provides a comprehensive description of how these entries are structured.

Mapping Order

By default, mappings are evaluated in priority order, with new entries being stored at the lowest priority. It can be challenging to infer the exact order of all mappings when they are spread across multiple accounts, so the table listing all current mappings in the LDAP sidetab shows the priority of every item and includes the account with which each is associated. Example:

alt text

From here you can move row entries to a higher or lower order of precedence by clicking and holding a row’s hotspot and then dragging the row up or down the list.

The priority order of user mappings determines the order in which search filters are evaluated when a user logs in. The first mapping to successfully locate an LDAP user record that matches the $USERNAME and any other criteria in its search filter will be used to determine the Anchore Enterprise account association for that user.

Once a user is located, subsequent mapping entries will be ignored, regardless of (possibly narrower) specificity, as only priority order matters here.

Test Mapping Behavior

You can evaluate the behavior of your user mappings by entering $USERNAME data (for example, the uid of a user) in the Check $USERNAME Against LDAP Mappings search field.

If an LDAP record is located that matches the search filter criteria of a mapping, you’ll be informed of which mapping provided the match, the associated Anchore Enterprise user, and the distinguished name of the user whose LDAP record was returned.

Login With LDAP Credentials

If a set of valid LDAP server connection properties have been stored by an administrator, the LDAP authentication option is activated in the application login view, in addition to the Default option of authenticating against the user records stored in Anchore Enterprise:

The value entered in the Username field will be used by the application to populate the $USERNAME token when evaluating each user mapping. The value entered in the Password field will be used to authenticate the matched user with the LDAP directory server.

Once these operations have completed, and providing the account associated with the mapping is not disabled, the user will be logged in.

4.20.4 - SSO

Overview

Anchore Enterprise can be configured to support user login to the UI using identities from external identity providers that support SAML 2.0. Anchore never stores any credentials for the users, only their usernames and Anchore permissions. All UI access is gated through a user’s valid login into the identity provider. Anchore uses the external provider to verify username identity and initialize a username, account, and roles on first login for a new user. Once a user’s identity is initialized in Anchore, the Anchore administrator can manage user permissions by managing the roles associated with the user’s identity in Anchore itself.

Terms

SAML Terms:

  • Identity Provider (IDP) - The service that stores the identity database and provides identity and authentication services to Anchore.
  • Service Provider (SP) - The service providing resources to the end user, in this case, the Anchore Enterprise deployment.
  • Assertion Consumer Service (ACS) - The consumer of SAML assertions generated by the Identity Provider. For Anchore Enterprise, the UI proxies the SAML assertion to the Anchore Enterprise API service which consumes it, but the UI is the network endpoint the user’s browser interacts with.

Anchore Terms:

  • Native User - A user that exists in the Anchore DB and has login credentials (passwords).
  • SAML User - A user that exists in the Anchore DB only with a username and permissions, but no credentials. This prevents any username conflicts. SAML users will also be associated with a single Identity Provider. This prevents overlapping usernames from multiple Identity Providers gaining access to unexpected users or accounts.

How SAML integration works

When a user logs into the Anchore Enterprise UI, they can choose which Identity Provider to authenticate with. User credentials are never passed to Anchore Enterprise. Instead, other information about the user is passed from the Identity Provider to Anchore. Information used by Anchore during login includes the username, authenticating Identity Provider, associated account, and initial RBAC permissions.
After the initial login, RBAC permissions can be adjusted for this user directly by an Anchore administrator. This gives the Anchore administrator the ability to control access for Anchore users without needing access to the corporate IDP system.

Dynamic SAML User Provisioning

The first time a SAML User logs into Anchore, if the username is not found within the Anchore DB, a record will be automatically created for the user. If the user’s associated account is not found within the Anchore DB, an account record will also be automatically created at this time. This is often referred to as Just In Time Provisioning (JIT).

Explicit Creation of SAML Users

An Anchore administrator has the ability to create other users with administrator privileges. This includes Native and SAML Users. When creating a SAML Administrator User, the username and the Identity Provider’s name will be required. Upon SSO login by this new user, it will be associated with account admin and have all the permissions of an Anchore administrator.

A global configuration mode is also available if SSO is the preferred method of login, but the Anchore administrator would like explicit control over which users can gain access to Anchore Enterprise.

sso_require_existing_users: ${ANCHORE_SSO_REQUIRES_EXISTING_USERS}

When this configuration mode is set to true, any users who have permissions to create other users will have the ability to explicitly create SAML Users. As stated above, when creating a SAML User, the username and the Identity Provider’s name will be required. In addition, an RBAC role will also be needed for each SAML User creation. Upon SSO login by this new user, it will be associated with the account it was created in and have all the RBAC permissions provided for it. When this configuration mode is set to true, SSO logins are only permitted for users who have existing SAML user records found in the Anchore DB.

When explicitly creating SAML Users, the account and RBAC role provided will take precedence over any default values or IDP Attributes which may be configured in the SAML Configuration described below. For more information, please see Mapping.

Note: Any users that have previously authenticated via SSO will continue to have access regardless of the configuration mode setting. If you wish to prevent future access when setting sso_require_existing_users to true, simply delete the user record in Anchore.
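
As a sketch, enabling this mode directly in config.yaml follows the same user_authentication block format shown earlier:

user_authentication:
  sso_require_existing_users: true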

SSO Login Validation

During subsequent SSO logins, Anchore will find an existing user record in the Anchore DB. The following information will be validated:

  • The user record must be a SAML User. If the user was previously configured as a Native User and you want to convert it to a SAML User, simply delete the user record in Anchore and have the user log in again via SSO.
  • The user record must be authenticating from the same Identity Provider. If the user has been changed to authenticate via a different Identity Provider, simply delete the user record in Anchore and have the user log in again via SSO.

Configuration Overview

In order to use SAML SSO, the Anchore Enterprise deployment must:

  1. Have OAuth enabled. This is required so that Anchore can issue bearer tokens for subsequent API usage by the UI to the system APIs.

  2. Using hashed passwords is optional but highly recommended. See User Authentication for more information on configuring OAuth and hashed password storage.

  3. Be able to reach the IDP login URL from the user’s browser.

  4. Be able to reach the metadata XML endpoint in the IDP (if using url).

Configuration of SAML SSO is done via API/UI operations but requires configuration both in your Identity Provider and in Anchore.

In the IDP:

  • Must support HTTP Redirect binding
  • Should support signed assertions and signed documents
  • Must allow unsigned client requests from Anchore
  • Must allow unencrypted requests and responses

Anchore IDP Configuration Fields are as follows.

  • Name - The name to use for this configuration. It will be part of the UI’s /service/auth/sso/ route as well as the /saml/sso/ and /saml/login/ routes that are used to implement SSO.
  • Enabled - Whether auth via this configuration is allowed.
  • ACS HTTPS Port - HTTPS port if non-standard. If omitted or -1, 443 is used.
  • SP Entity ID - The identity for the Anchore system to present when requesting auth from the SAML IDP. This is typically a URL identifying the Anchore deployment.
  • ACS URL - The URL to which SAML Responses should be sent from the IDP. For UI usage, this should be the hostname and port of the UI deployment and a path: /service/sso/auth/{idp config name}.
  • Default Account - If set, this is the account that SSO users will be initialized to be members of upon sign-in the first time. This property or IDP Account Attribute must be set.
  • Default Role - The role that will be granted to new users on first sign-in. Use this setting to apply a consistent role to all users if you do not want that data imported from the IDP. This property or IDP Role Attribute must be set.
  • IDP Account Attribute - A SAML assertion attribute name from the SAML responses that Anchore will use to determine the account name to initialize the user into. If the account does not exist, it is created. For more information on the initialization process see Initializing User Identities below. This property or Default Account must be set.
  • IDP Username Attribute - A SAML assertion attribute name from the SAML responses that Anchore will use to determine the username for the anchore identity. This is optional and typically will not need to be set. If omitted, the SAML Subject is used and this should meet most needs.
  • IDP Metadata URL - URL to retrieve the IDP metadata xml from. This value is mutually exclusive with IDP Metadata XML, so only one of the two properties may be specified.
  • IDP Metadata XML - Raw XML for the IDP metadata. This value is mutually exclusive with IDP Metadata URL, so only one of the two properties may be specified.
  • IDP Role Attribute - The SAML assertion attribute from the SAML responses that Anchore will use to initialize a user’s roles. This may be a multi-value property. This property or Default Role must be set.
  • Require Signed Assertions - If true, require the individual assertions in the SAML response to be signed by the IDP.
  • Require Signed Response - If true, require the SAML response document to be signed.

Using SAML Attributes to Initialize Users and Account in Anchore

The properties of the user including the account it belongs to, the roles it has in that account as well as any other accounts the user has role access to are all initialized based on the combination of the Anchore IDP configuration and the SAML response presented by the IDP at the user’s first login.

See Mapping for more information on that process and how the configuration works.

Deleting SAML SSO Configuration

An Anchore administrator has the ability to create, modify, and delete the SAML Configuration. During deletion of the SAML Configuration, any user that was created with this Identity Provider, either dynamically or explicitly, will also be deleted.

4.20.4.1 - Mapping SSO Identities into Anchore

Overview of Mapping External Identities into Anchore

Anchore SSO support provides a way to keep users’ credentials centrally stored in a corporate Identity Provider and outside of the Anchore deployment. Anchore’s SSO approach uses the external identity store to authenticate a user and bootstrap its permissions using Anchore’s RBAC system. For each login, the user’s identity is verified against the external provider but once the identity is initialized inside the Anchore deployment, its access to resources is controlled by the Anchore Enterprise RBAC system. This approach allows Anchore admins to manage user access without having to require administrator access to a corporate or central IT identity system while still being able to leverage that system for defining identity and securing access credentials.

The identity mapping process has two distinct phases:

  1. Initial identity bootstrap - Occurs on the first login of a user to Anchore causing dynamic construction of an Anchore user record.
  2. Identity verification and assertion validation - Validates the administrators requirements against the external identity record on each login.

Defining the Username

By default, with SSO, the SAML Assertion’s “Subject” attribute is used to define the username. Using the subject is the right solution in most situations, but in extreme cases it may be necessary to provide the username in some other form via attributes in the SAML Response. This behavior can be configured by setting: idp_username_attribute in the SAML Configuration within Anchore. This should only be used when the subject either cannot be used due to it being a transient ID as configured by the IDP itself, or you want the username to map to some form other than the IDP’s username, email, or persistent ID.

If the idp_username_attribute is set to an attribute name, and that attribute is not found or has no value in the SAML Response presented during login, then that user login will be rejected by Anchore.

If idp_username_attribute is an empty string or null, then the SAML Response’s subject attribute is used as the username. This is the default behavior.

Defining the Account the User Belongs To

In Anchore, all users belong to an Account. When an SSO user logs into Anchore UI for the first time, the identity is initialized with the username (as defined above), but the account to which the user belongs is configurable via a separate pair of configuration properties in the SAML Configuration within Anchore. These configuration properties are mutually exclusive.

idp_account_attribute - If set in the SAML Configuration, this attribute must be found within the SAML Response during each login for every user. The attribute value received is the ‘account name’, and it must be valid: a valid value must be greater than three characters and must not be a reserved account name such as ‘admin’ or ‘anchore-system’. If the attribute is not found within the SAML Response or the value is not valid, the login is rejected.

default_account - If set in the SAML Configuration, its value is the account to which all users that log in from this IDP will be assigned.

  • In both cases, on the initial login by the user, if the account does not already exist within Anchore, an external account with that name is created.

Defining the User’s Initial Roles

In Anchore, all users are allowed to have one or more Roles that describe a set of access permissions.
Roles are assigned to the user via a separate pair of configuration properties in the SAML Configuration within Anchore.
These configuration properties are mutually exclusive.

idp_role_attribute - If set in the SAML Configuration, the attribute must be found within the SAML Response during each login for every user. The attribute value received is one or more ‘role names’, and the values must be valid. If the attribute is not found within the SAML Response or a value is not valid, the login is rejected.

default_role - If set in the SAML Configuration, its value will be the single role set for all users that log in with this IDP. During a user’s first login, this role will be set on the account during user identity initialization. On subsequent logins for this user, the value will be ignored.

Revoking SSO User Access

  1. Disable the Anchore account. Any user, SSO or otherwise, that is a member of a disabled account cannot log in or perform API operations.
  2. If using idp_account_attribute or idp_role_attribute, simply remove or zero that attribute at the IDP for that user or group. All affected users will no longer be able to log in to Anchore.

Changing the Anchore SAML Configuration

Initialization of identities and roles occurs on the user’s first login. Once initialized, the configuration must match the SAML Response presented during each login for the user to log in.

Thus, changes to the SAML Configuration within Anchore may affect subsequent logins for your users. For instance, if you change the SAML Configuration within Anchore to start using attributes instead of defaults, a user’s SAML Response will need to contain the same attributes. Failure to find the correct attribute(s) with valid values will prevent the user’s login.

Example SSO configurations

Anchore and an external Identity Provider

Here are examples for both Okta and KeyCloak that provide simple defaults and identity mappings.

Identity Mapping configuration

Below are example configuration values for a single SAML Configuration within Anchore to achieve different behavior:

  1. All SSO users in one account with the same read-write permissions:

    • default_account = ‘account’
    • default_role = ‘read-write’
  2. SSO users in accounts managed by the IDP property, for example a ‘groups’ property:

    • default_account = null
    • default_role = null
    • idp_account_attribute = "primary_group"
    • idp_role_attribute = "roles"
    • Will initialize the user with the roles specified in the ‘roles’ attribute for the accounts in the ‘primary_group’ attribute.
      • E.g. if ‘primary_group’ = [’testers’] and ‘roles’ = [‘read-only’] for a user jdoe@example.com
      • jdoe@example.com will be initialized at login as username = jdoe@example.com in account testers with role read-only
      • Because the account is from an attribute, another user might have ‘primary_group’ = [‘security_engineers’] and thus be initialized into a different account in Anchore.

4.20.4.1.1 - KeyCloak SAML Example

Configuring SAML SSO for Anchore with KeyCloak

The JBoss KeyCloak system is a widely used and open-source identity management system that supports integration with applications via SAML and OpenID Connect. It also can operate as an identity broker between other providers such as LDAP or other SAML providers and applications that support SAML or OpenID Connect.

The following is an example of how to configure a new client entry in KeyCloak and configure Anchore to use it to permit UI login by KeyCloak users that are granted access via KeyCloak configuration.

Configuring KeyCloak

Anchore supports multiple IDP configurations, each given a name. For this example we’ll choose the name “keycloak” for our configuration. This is important as that name is used in several URL paths to ensure that the correct configuration is used for validating responses, so make sure you pick a name that is meaningful to your users (they will see it in the login screen) and also that is url friendly.

Some config choices and assumptions specifically for this example:

  1. Let’s assume that you are running Anchore Enterprise locally. Anchore Enterprise UI is available at: https://localhost:3000. Replace with the appropriate url as needed.
  2. We’re going to choose keycloak as the name of this saml/sso configuration within Anchore. This will identify the specific configuration and is used in urls.
  3. Based on that, the Single-SignOn URL for this deployment will be: https://localhost:3000/service/sso/auth/keycloak
  4. Our SP Entity ID will use the same url: http://localhost:3000/service/sso/auth/keycloak

Add a Client entry in KeyCloak

  1. See SAML Clients in KeyCloak documentation

  2. For this example, set the following values in “Add Client” screen (these are specific to the settings in this example described above):

    1. Client ID - http://localhost:3000/service/sso/auth/keycloak - This will be the SP Entity ID used in the Anchore configuration later
    2. Client Protocol: “saml”
    3. Client SAML Endpoint: “http://localhost:3000/service/sso/auth/keycloak”
  3. In the next screen, Client Settings

    Client Settings1

    1. Name - “Anchore Enterprise”. This is only used to display a friendly name to Keycloak users in the KeyCloak UI. Can use any name you like.

    2. Enabled - Select on

    3. Include Authn Statement - Select on

    4. Sign Documents - Select on

    5. Client Sign Authn Requests - Select Off

    6. Sign Assertions - Select off

    7. Encrypt Assertions - Select off

    8. Client Signature Required - Select off

    9. Force Post Binding - Select off. Anchore requires the HTTP Redirect Binding to work, so this setting must be off to enable that.

    10. Force Name ID Format - Select on

    11. Name ID Format - Select username or email (transient uses a generated UUID per login and persistent uses the Keycloak user’s UUID)

    12. Root URL - Leave empty

    13. Valid Redirect URIs - Add http://localhost:3000/service/sso/auth/keycloak

    14. Base URL - Leave empty

    15. Master SAML Processing URL - http://localhost:3000/service/sso/auth/keycloak

    16. Fine Grain SAML Endpoint Configuration

      1. Assertion Consumer Service Redirect Binding URL - http://localhost:3000/service/sso/auth/keycloak
    17. Save the configuration

    Client Settings2

  4. Download the metadata xml to import into Anchore

    1. Select the ‘Installation’ tab.
    2. Select the format for your KeyCloak version:
    • Keycloak <= 5.0.0: select the Format Option “SAML Metadata IDPSSODescriptor”
    • Keycloak 6.0.0+: select the Format Option “Mod Auth Mellon files”, then unzip the downloaded .zip and locate idp-metadata.xml
    3. Download or copy the XML to save in the Anchore configuration

Configure Anchore Enterprise to use KeyCloak

  1. You’ll need the following information from KeyCloak in order to configure the SAML record within Anchore:

    1. The name to use for the configuration, in this example keycloak
    2. The Metadata XML downloaded or copied from the previous section

  2. In the Anchore UI, create an SSO IDP Configuration:

    1. Login as admin
    2. Select the “Configuration” Tab on the top
    3. Select “SSO” on the left-side menu
    4. Click “Let’s Add One” in the configuration listing

Anchore KeyCloak setup

  1. Enter the values:
    1. Name: “keycloak” - This is the name of the configuration and will be referenced in login and sso URLs, so we use the value chosen at the beginning of this example
    2. Enabled: True - This controls whether or not users will be able to login with this configuration. We’ll enable it for the example but can disable later if no longer needed.
    3. ACS HTTPS Port: -1 or 443 - This is the port to use for HTTPS to the ACS (Assertion Consumer Service, in this case the UI). It is only needed if you need to use a non-standard https port
    4. SP Entity ID: http://localhost:3000/service/sso/auth/keycloak (NOTE: this must match the Client ID you used for the Client in the KeyCloak setup)
    5. ACS URL: http://localhost:3000/service/sso/auth/keycloak
    6. Default Account: keycloakusers for this example, but can be any account name (existing or not) that you’d like the users to be members of. See Mappings for more information on how this works.
    7. Default Role: read-write for this example so that the users have full access to the account to analyze images, setup policies, etc.
    8. IDP Metadata XML: Paste the downloaded or copied XML from KeyCloak in step 4.3 above
    9. Require Signed Assertions - Select off
    10. Require Signed Response - Select on
    11. Save the configuration

Anchore KeyCloak setup2

You should now see a ‘keycloak’ option in the login screen for the Anchore Enterprise UI. This will redirect users to login to the KeyCloak instance for their username/password and will create a new user in Anchore in the keycloakusers account with read-write role.

4.20.4.1.2 - Microsoft Entra ID SAML Example

Configuring SAML SSO for Anchore with Microsoft Entra ID (formerly Azure Active Directory)

Azure Enterprise Applications allow identities from an Azure Directory to be federated via Single Sign On (SSO) to other applications. In doing so Azure is acting as a SAML Identity Provider (IdP) that can be used with Anchore Enterprise as a SAML Service Provider (SP).

Configuring Azure

Anchore supports multiple IDP configurations, each given a name. For this example we’ll choose the name “azure” for our configuration. This is important as that name is used in several URL paths to ensure that the correct configuration is used for validating responses, so make sure you pick a name that is meaningful to your users (they will see it in the login screen) and also that is url friendly.

Some config choices and assumptions specifically for this example:

  1. Let’s assume that you are running Anchore Enterprise locally. Anchore Enterprise UI is available at: https://anchore.example.com. Replace with the appropriate url as needed.
  2. We’re going to choose azure as the name of this saml/sso configuration within Anchore. This will identify the specific configuration and is used in urls.
  3. Based on that, the Single-SignOn URL for this deployment will be: https://anchore.example.com/service/sso/auth/azure
  4. Our SP Entity ID will use the same url: https://anchore.example.com/service/sso/auth/azure

Azure Setup

  1. Create a generic Enterprise Application in Azure.

    Azure Settings1

    1. Navigate to the Azure portal: https://portal.azure.com

    2. Type “Enterprise Applications” into the search bar in the top middle of the screen and click on it.

    3. Click on Create your own application near the upper left.

  2. Select SAML as the single sign-on method.

    Azure Settings2

  3. From your new Azure AD Enterprise Application select Single sign-on.

    Azure Settings3a

  4. Select Edit Basic SAML Configuration (section 1).

    Azure Settings3b

  5. For both Identifier (Entity ID) & Reply URL (Assertion Consumer Service URL) enter the URL for your Anchore deployment along with: “/service/sso/auth/azure”. The word “azure” can be customized. This will become the name of the SSO configuration that users will see each time they login via SSO. Example URL: https://anchore.example.com/service/sso/auth/azure (Make note of the URL for upcoming steps) - Click Save when finished

    Azure Settings3c

  6. Copy the “App Federation Metadata Url” found in section 3. This will be used in the next section.

    Azure Settings4

  7. Configure Users and Groups that will be allowed to login to Anchore.

    Azure Settings8

    1. Select Users and groups under Manage.

    2. Click Add user/group.

    3. Click on the hyper-linked “None Selected”.

    4. Select/Check users and/or groups who will be granted access to login to Anchore.

    5. Press Select when finished.

Configure Anchore Enterprise to use the Azure Identity Provider

  1. Ensure you have a non-admin Account created in Anchore. In this example, an account named “saml” is created and set as the default account for users who log in via this SSO configuration but do not already exist in Anchore.

    Anchore non-admin account

  2. Login to Anchore and navigate to System, SSO and click on Add new SAML IDP.

    Azure Settings6

  3. Enter the values (Note that all values can be changed except Name):

    1. Under “Name” enter “azure” or whatever you choose to name the SSO integration.
    2. Enter the URL (https://anchore.example.com/service/sso/auth/azure in this example) under “SP Entity ID” and “ACS URL”. This URL was known as Entity ID and Reply URL in Azure.
    3. Select the account created or selected in step 1 for “Default Account”.
    4. Select the appropriate role for “Default Role”.
    5. Paste the Federation Metadata URL from the previous section into “IDP Metadata URL”.
    6. Uncheck “Require Signed Response?”.

    Azure Settings7

  4. Save the configuration. Setup is complete when you see a login with ‘azure’ option on the login screen. Users can now log in to your Anchore deployment using this Azure endpoint.

See: Mapping Users and Roles in SSO for more information on using the account and role defaults, IDP attribute values and understanding how identities are mapped into Anchore’s RBAC system.

Using Azure Groups with Anchore User Groups

User groups in Anchore can grant users access to multiple Anchore accounts with varying/selectable access levels in each.

In this example we have two teams within the organization, named TeamA and TeamB. We will create a group in Azure for each team that will be sent to Anchore upon SAML login, and each team will have its own Anchore account. We can optionally grant each team read-only access to the other team’s account.

To map groups from Azure to Anchore:

  1. Create an account in Anchore for TeamA and TeamB.

    Create Anchore Organizational Accounts

  2. In Azure create or add desired groups to the “Users and Groups” section of your Anchore SSO Enterprise Application.

    Azure User and groups 1

  3. In Anchore, create user groups matching the group names from Azure. Note that the name of a user group cannot be changed, but its contents can be changed at any time. For the sake of this example, add both the TeamA and TeamB accounts. Grant the TeamA user group full control of the TeamA account and read-only access to the TeamB account. When adding the user group for TeamB, grant full control of the TeamB account and read-only access to the TeamA account.

    Anchore add user groups

  4. In your Azure Enterprise Application under Single Sign On select edit on Attributes and Claims, section 2.

    Azure edit claims

  5. Click on Add a group claim.

    Azure Add group claim

  6. For “Which groups associated with the user should be returned in the claim”, select “Groups assigned to the application”.

    Azure Configure group claim

    1. For Source attribute, select “Cloud-only group display names”.
    2. Expand Advanced Options.
    3. Check “Customize the name of the group claim”.
    4. Check “Emit groups as role claims”.
  7. Edit your Anchore SAML Identity Provider and enter “http://schemas.microsoft.com/ws/2008/06/identity/claims/role” in the “IDP User Groups Attribute”. Under User Groups, select TeamA and TeamB.

    Anchore configure IDP User Groups Attribute

At this point, when logging in via azure SSO, a user will land in the Default Account and can switch accounts from the dropdown in the upper right. Alternatively, if an IDP Account Attribute was configured to initially place them in the desired account, they need only switch accounts as appropriate.

Anchore switch accounts

4.20.4.1.3 - Okta SAML Example

Configuring SAML SSO for Anchore with Okta

Some config choices and assumptions specifically for this example:

  1. Anchore UI endpoint: http://localhost:3000. Replace with the appropriate url as needed.
  2. IDP Config Name: okta. This will identify the specific configuration and is used in urls, and can be any url-safe string you’d like.
  3. The Single Sign-on URL (also called the Assertion Consumer Service/ACS URL) for this deployment will be: http://localhost:3000/service/sso/auth/okta. This is constructed with the UI endpoint and path /service/sso/auth/{IDP Config Name}
  4. Our SP Entity ID will use the same url: http://localhost:3000/service/sso/auth/okta. This could be different but for simplicity we use the same value.

Configure Okta: Add an Application

See Okta SAML config to learn how to create a new application. The following notes apply during specific steps of that walkthrough.

Example Setup Screen

  1. In step #6
    1. Single sign-on URL: this is the URL Okta will direct users to. It must be a URL in the Anchore Enterprise UI based on the name you chose for the configuration. In our example: http://localhost:3000/service/sso/auth/okta
    2. Set the Use this for Recipient URL and Destination URL checkbox as well. Then set the Audience URI (SP Entity ID) to a URI that will identify the Anchore installation. This can be the same as the single sign-on URL for simplicity. We’ll need to enter this value later in the Anchore config as well.
    3. Leave Default RelayState empty.
    4. Name ID format can be left “Unspecified” or set to an email or username format.
    5. Choose the application username that makes sense for your install. Anchore can support a regular username or email address for the usernames.
  2. In step #7, these attribute statements are not required but can be set. This is, however, where you can set additional attributes that you want Anchore to use to initialize the user’s Anchore account or permission set. Later in the SAML Configuration you can specify attributes that Anchore will look for to extract the account name and roles for initializing a user that doesn’t yet exist in Anchore.
  3. In step #9, be sure to copy the metadata URL link so you have that. Anchore will need that value.
    1. Right-click the Identity Provider metadata link (shown as “Okta Example Metadata”) and copy the link address. The URL should be something like: https://<youraccount>.okta.com/app/<appid>/sso/saml/metadata
  4. Finish the setup and save the Application entry.
  5. Important: To allow Okta users to login to Anchore you need to assign the Okta user to this new Application entry. See Assign and unassign apps to users for information on this process.

Configure Anchore Enterprise to use the Okta Identity Provider

You’ll need the following information from okta to enter in the Anchore UI:

  • The name chosen for the configuration: okta in this case
  • Metadata XML URL (from “configuring okta” step 3.1 above)
  • The Single Sign-on/ACS URL described in Step 3

In the Anchore UI, create an SSO Idp Configuration:

  1. Login as admin
  2. Select “Configuration” Tab on the top
  3. Select “SSO” on the left-side menu
  4. Click “Let’s Add One” in the configuration listing

Settings1

And…

Settings2

  1. Enter the values:

    • Name: okta - This is the name of the configuration and will be referenced in login and sso URLs, so we use the value chosen at the beginning of this example
    • Enabled: True - This controls whether or not users will be able to login with this configuration. We’ll enable it for the example but can disable later if no longer needed.
    • ACS HTTPS Port: -1 or 443 - This is the port to use for HTTPS to the ACS (Assertion Consumer Service, in this case the UI). It is only needed if you use a non-standard HTTPS port.
    • ACS URL: http://localhost:3000/service/sso/auth/okta
    • Default Account - The account to which all okta users are added when they log in; for this example we use oktausers
    • Default Role - The role to grant okta users when they initially log in. For this example we use read-write, the standard user type that has most abilities except user management.
    • IDP Metadata URL - The url from “Configure Okta” step 3.1
    • Require Signed Assertions - Select On
    • Require Signed Response - Select On
  2. Save the configuration. Setup is complete when you see a login with ‘okta’ option on the login screen. Users can now log in to your Anchore deployment using this Okta endpoint.

See: Mapping Users and Roles in SSO for more information on using the account and role defaults, IDP attribute values and understanding how identities are mapped into Anchore’s RBAC system.

4.21.1 - User Management

Introduction

In this section you will learn how to create accounts and users, and how to assign roles, using the Anchore Enterprise UI.

Assumptions

  • You have a running instance of Anchore Enterprise and access to the UI.
  • You have the appropriate permissions to create accounts, users, and roles. This means you are either a user in the admin account, or a user that already is a member of the account-users-admin role for your account.

For more information on accounts, users, roles, and permissions see: Role Based Access Control

  • After a successful login, navigate to the configuration tab on the main menu.

alt text

Creating Accounts

In order to create accounts, navigate to the accounts tab from inside the configuration view and select “Create New Account”.

Upon selection, a popup window will display asking for two items:

  • Account Name (required)
  • Email

In the following example I’ve created a ‘security’ account:

alt text

Now that the account has been created, I can begin to add users to it.

Viewing Role Permissions

To view the permissions associated with a specific role using the UI, select an account, and navigate to the roles tab:

alt text

To view the members in the account assigned to a specific role, select the ‘View’ button on the right-hand side.

Creating Users and assigning Roles

Immediately after an account is created it will, by default, have zero users. To add users, select the edit button corresponding to the account you would like to add users to. This will bring you to the account page, where you can add your first user by selecting the “Let’s add one!” button.

Upon selection, a popup window will display asking for three items:

  • Username (required)
  • Password (required)
  • Assign Role(s)
    • Note that you can assign more than one role to a user. For a normal user with full access to add, update, and evaluate images, we recommend assigning the read-write role. The other roles are for specific use-cases, such as CI/CD automation and read-only access for reporting. See: Role Based Access Control for more details on the roles and their capabilities.

In this case I’ve assigned three roles to the user:

alt text

Once ‘OK’ is selected, the user will be created and you will be able to edit or remove the user as needed.

Deleting and Disabling Accounts

In order to delete an account, disable the account by sliding the button under the ‘Active’ column for the corresponding account, then select the ‘Remove’ button on the right-hand side.

A few notes to keep in mind when deleting accounts:

  • The ‘admin’ account is locked and cannot be deleted.
  • Once deletion is in progress, all resources (users, images, automated tasks, etc.) will start a garbage-collection process and won’t be viewable, although the account will still be present in the list to prevent admins from adding an account with the same name.
  • Once deleted, an account and its associated resources can’t be recovered.

A couple notes on disabling accounts:

  • Disabling accounts is a way for administrators to freeze an account while still keeping any associated analysis info intact.
  • Any automated tasks associated with the disabled account will be frozen.

Switching Account Data Context

System administrator users are able to view another account’s data context using the dropdown located at the top-right:

alt text

Generating API Keys

Enterprise release 5.1 adds support for API keys for various operations. API keys facilitate use-cases where you do not want to expose your main credentials; for example, integrations can use API keys instead of username/password credentials.

In order to generate an API key, navigate to the Enterprise UI and click on the top right button and select ‘API Keys’:

alt text

Clicking ‘API Keys’ will present a dialog that lists your active, expired and revoked keys:

alt text

To create a new API key, click on the ‘Create New API Key’ and this will open another dialog where it asks you for relevant details for the API key:

alt text

You can specify the following fields:

  • Name: The name of your API key. It is mandatory and must be unique; you cannot have two API keys with the same name.
  • Description: An optional text descriptor for your API key.
  • Expiry Date: An expiry date for your API key. You cannot specify a date in the past, and by default the expiry cannot exceed 365 days.

Click Save to save your API key. The UI will display the output of the operation:

alt text

NOTE: Make sure you copy the key value that is output; there is no way to retrieve this value later.

Revoking API keys

If you suspect your API key has been compromised, you can revoke it while it is active. This prevents the key from being used for authentication. To revoke a key, click the ‘Revoke’ button next to it:

alt text

NOTE: Be careful when revoking a key; this operation is irreversible, and you cannot mark the key active again later.

By default the UI displays only active API keys. To see your revoked and expired keys as well, turn off the ‘Show only active API keys’ toggle:

alt text

Managing API Keys as an Admin

As an account admin you can manage API keys for all users in the account you are admin in. A global admin can manage API keys across all accounts and all users.

To access the API keys as an admin, click on the ‘System’ icon and navigate to ‘Accounts’:

alt text

Click ‘Edit’ for the account you want to manage keys for, then click the ‘Tools’ button next to the user whose keys you wish to manage:

alt text

4.21.2 - Accounts and Users

System Initialization

When the system first initializes, it creates a system service account (invisible to users) and an administrator account (admin) with a single administrator user (admin). The password for this user is set at bootstrap using a default value or an override available in the config.yaml on the catalog service (which is what initializes the db). There are two top-level keys in the config.yaml that control this bootstrap:

  • default_admin_password - Sets the initial password (it can be updated using the API once the system is bootstrapped). Defaults to foobar if omitted or unset.

  • default_admin_email - Sets the initial admin account email on bootstrap. Defaults to admin@myanchore if unset.
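
For illustration, a minimal override of these two keys in the catalog service’s config.yaml might look like the following sketch (the values are placeholders, not recommended production credentials):

default_admin_password: use-a-strong-password-here
default_admin_email: [email protected]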

Managing Accounts Using AnchoreCTL

These operations must be executed by a user in the admin account. The following examples are executed from within the enterprise-api container if you are using the quickstart guide:

First, exec into the enterprise-api container if using the quickstart Docker Compose deployment. For other deployment types (e.g., a Helm chart deployment into Kubernetes), execute these commands anywhere you have AnchoreCTL installed that can reach the external API endpoint for your deployment.

docker compose exec enterprise-api /bin/bash
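
Alternatively, if you have AnchoreCTL installed outside the containers, you can point it at your deployment by setting its connection environment variables. A minimal sketch, assuming an illustrative API URL and the default admin credentials (replace all three values for your deployment):

ANCHORECTL_URL=http://anchore.example.com:8228 ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl account list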

Getting Account and User Information

To list all the currently present accounts in the system, perform the following command:

# anchorectl account list
 ✔ Fetched accounts
┌──────────┬────────────────────┬──────────┐
│ NAME     │ EMAIL              │ STATE    │
├──────────┼────────────────────┼──────────┤
│ admin    │ admin@myanchore    │ enabled  │
│ devteam1 │ [email protected] │ enabled  │
│ devteam2 │ [email protected] │ enabled  │
└──────────┴────────────────────┴──────────┘

To review the list of users for a specific account, issue the following:

# anchorectl user list --account devteam1
 ✔ Fetched users
┌───────────────┬──────────────────────┬──────────────────────┬────────┬────────┐
│ USERNAME      │ CREATED AT           │ LAST UPDATED         │ SOURCE │ TYPE   │
├───────────────┼──────────────────────┼──────────────────────┼────────┼────────┤
│ devteam1admin │ 2022-08-25T17:43:43Z │ 2022-08-25T17:43:43Z │        │ native │
└───────────────┴──────────────────────┴──────────────────────┴────────┴────────┘

Adding a New Account

To add a new account which, by default, will have no active credentials, issue the following command:

# anchorectl account add devteam1 --email [email protected]
 ✔ Added account
Name: devteam1
Email: [email protected]
State: enabled

Note that the email address is optional and can be omitted.

At this point the account exists but contains no users. To create a user with a password, see below in the Managing Users section.

Disabling Account

Disabling an account prevents any of that account’s users from being able to perform any actions in the system. It also disables all asynchronous updates on resources in that account, effectively freezing the state of the account and all of its resources. Disabling an account is idempotent; if it is already disabled, the operation has no effect. Accounts may be re-enabled after being disabled.

# anchorectl account disable devteam1
 ✔ Disabled account
State: disabled

Enabling an Account

To restore a disabled account to allow user operations and resource updates, simply enable it. This is idempotent, enabling an already enabled account has no effect.

# anchorectl account enable devteam1
 ✔ Enabled account
State: enabled

Deleting an Account

Deleting an account will synchronously delete all users and credentials for the account and transition the account to the deleting state. At this point the system will begin reaping all resources for the account. Once that reaping process is complete, the account record itself is deleted. An account must be in a disabled state prior to deletion. Failure to be in this state results in an error:

# anchorectl account delete devteam1
error: 1 error occurred:
	* unable to delete account:
{
  "detail": {
    "error_codes": []
  },
  "httpcode": 400,
  "message": "Invalid account state change requested. Cannot go from state enabled to state deleting"
}

So, first you must disable the account, as shown above. Once disabled:

# anchorectl account disable devteam1
 ✔ Disabled account
State: disabled

# anchorectl account delete devteam1
 ✔ Deleted account
No results

# anchorectl account get devteam1
 ✔ Fetched account
Name: devteam1
Email: [email protected]
State: deleting

Managing Users Using AnchoreCTL

Users exist within accounts, but usernames themselves are globally unique since they are used for authenticating API requests. User management can be performed by any user in the admin account in the default Anchore Enterprise configuration using the native authorizer. For more information on configuring other authorization plugins see: Authorization Plugins and Configuration.

Create User in a User-Type Account

To create a new user credential within a specified account, you can issue the following command. Note that the ‘role’ assigned will dictate the API/operation level permissions granted to this new user. See help output for a list of available roles, or for more information you can review roles and associated permissions via the Anchore Enterprise UI. In the following example, we’re granting the new user the ‘full-control’ role, which gives the credential full access to operations within the ‘devteam1’ account namespace.

# ANCHORECTL_USER_PASSWORD=devteam1adminp4ssw0rd anchorectl user add --account devteam1 devteam1admin --role full-control
 ✔ Added user                                                                                                                                                                                                                                                devteam1admin
Username: devteam1admin
Created At: 2022-08-25T17:50:18Z
Last Updated: 2022-08-25T17:50:18Z
Source:
Type: native

# anchorectl user list --account devteam1
 ✔ Fetched users
┌───────────────┬──────────────────────┬──────────────────────┬────────┬────────┐
│ USERNAME      │ CREATED AT           │ LAST UPDATED         │ SOURCE │ TYPE   │
├───────────────┼──────────────────────┼──────────────────────┼────────┼────────┤
│ devteam1admin │ 2022-08-25T17:50:18Z │ 2022-08-25T17:50:18Z │        │ native │
└───────────────┴──────────────────────┴──────────────────────┴────────┴────────┘

That user may now use the API:

# ANCHORECTL_USERNAME=devteam1admin ANCHORECTL_PASSWORD=devteam1adminp4ssw0rd ANCHORECTL_ACCOUNT=devteam1 anchorectl user list
 ✔ Fetched users
┌───────────────┬──────────────────────┬──────────────────────┬────────┬────────┐
│ USERNAME      │ CREATED AT           │ LAST UPDATED         │ SOURCE │ TYPE   │
├───────────────┼──────────────────────┼──────────────────────┼────────┼────────┤
│ devteam1admin │ 2022-08-25T17:50:18Z │ 2022-08-25T17:50:18Z │        │ native │
└───────────────┴──────────────────────┴──────────────────────┴────────┴────────┘

Deleting a User

Using the admin credential, or a credential that has a user management role assigned for an account, you can delete a user with the following command. In this example, we’re using the admin credential to delete a user in the ‘devteam1’ account:

ANCHORECTL_USERNAME=admin ANCHORECTL_ACCOUNT=admin ANCHORECTL_PASSWORD=foobar anchorectl user delete devteam1admin --account devteam1
 ✔ Deleted user
No results

Updating a User Password

Note that only system admins can execute this for a different user/account.

As an admin, to reset another user’s credentials:

# ANCHORECTL_USER_PASSWORD=n3wp4ssw0rd anchorectl user set-password devteam1admin --account devteam1
 ✔ User password set
Type: password
Value: ***********
Created At: 2022-08-25T17:58:32Z

To update your own password:

# ANCHORECTL_USERNAME=devteam1admin ANCHORECTL_PASSWORD=existingp4ssw0rd ANCHORECTL_ACCOUNT=devteam1 anchorectl user set-password devteam1admin
 ❖ Enter new user password  : ●●●●●●●●●●●
 ❖ Retype new user password : ●●●●●●●●●●●
 ✔ User password set
Type: password
Value: ***********
Created At: 2022-08-25T18:00:35Z

Or, to perform the operation fully-scripted, you can set the new password as an environment variable:

ANCHORECTL_USERNAME=devteam1admin ANCHORECTL_PASSWORD=existingp4ssw0rd ANCHORECTL_ACCOUNT=devteam1 ANCHORECTL_USER_PASSWORD=n3wp4ssw0rd anchorectl user set-password devteam1admin
 ✔ User password set
Type: password
Value: ***********
Created At: 2022-08-25T18:01:19Z

4.21.3 - Data Account/Context Switching

Overview

Administrators and specially-entitled standard users are offered the ability to context switch between the image analysis data contexts of different accounts. This capability allows you to view the analysis data held inside a different account while still retaining your own user profile configuration.

When you switch data context, the data-oriented aspects of the application will change but the qualities specific to your original account—herein referred to as your actual account—remain the same. Administrators keep their original permission set and have full control within the switched account. The account availability and associated permission set for standard users are decided by the role configuration of their switching entitlement, and these roles can additionally be set to differ per account.

This feature allows users to gain insights into multiple datasets, can be used by administrators for troubleshooting purposes or to make ad-hoc modifications to the data-oriented aspects of any account, and provides standard users with an additional level and vector of access control.

The following sections in this document describe how to switch and reset data contexts—both as an administrator and as a standard user—and how administrators can assign this capability to standard users.

Administrative Users

Context switching as an administrator is available without prior configuration and only requires that an account other than your own be available. When you click the account button in the top-right of the screen you are presented with a menu that contains an entry called Switch Account Data Context, which will be enabled when one or more accounts other than your own are present.

Clicking this item displays a submenu that describes all currently available accounts—both active and disabled—into which you can switch context:

Your home account is represented by the label Actual. If an account is disabled, this is indicated by the label Disabled (note that only administrators can context switch into disabled accounts). The account category—administrator or standard user—is indicated by the user-type icon.

Your current data context is represented by an entry with an emphasized title and checkmark prefix. When you click an entry for a different account, the application view will switch to use the data provided by this new context. The account button and dropdown items are similarly updated to reflect this change:

You will also notice a change to the background color of the main view, which serves as a reminder that your current data context is now different than the one provided by your actual account. In addition, a button is now present on the navigation bar that allows you to immediately revert to your actual data context when clicked (you can of course also use the menu to do this):

In the above example, the analysis information now presented is exactly what a user of the standard account would see in their actual account. As an administrator, you are now free to browse and interact with this data, add tags or repositories for analysis, create policies etc., and there are no permission restrictions on any of these operations.

Note: only the analysis data context has switched; this new state does not extend to application data items such as private registry configurations.

Standard Users

Non-administrative users can also switch context if this capability has been conferred upon them by an administrator.

When you add a new standard user (or modify an existing one) you can optionally associate them with one or more additional accounts, providing those accounts are not currently disabled. The Add a New User dialog, which is accessed from within the account editor in the Configuration > Accounts view, is shown below:

Note: If an account is currently active and available for addition, but is subsequently disabled, the standard user will not be able to switch into that account.

For each associated account you must also provide one or more RBAC roles that determine how the standard user can interact with that account after they have switched context:

For example, a user may have full-control within their actual account, but could be restricted to read-only operations after switching context. You can provide multiple different roles for different accounts, but you must provide at least one role per account association:

Account associations can also be removed by clicking the X adjacent to each role list, or by removing the labels directly from the Associate Account(s) dropdown control.

Once you are satisfied with the user configuration, click OK to create (or update) these associations. The standard user will now be able to switch account data context using the same procedure as the one described for administrators, presented earlier in this document.

4.21.4 - Role-Based Access Control

Overview

Anchore Enterprise includes support for using Role-Based Access Control (RBAC) to control the permissions that a specific user has over a specific set of resources in the system. This allows administrators to configure specific, limited permissions for users, enabling limited-access accounts for things like CI/CD automation, read-only users for browsing analysis output, or security team users that can modify policy but not change image analysis configuration.

Anchore Enterprise provides a predefined set of roles. Please see the table below for the complete list.

The Enterprise UI contains an enumeration of the specific permissions granted to users that are members of each of the roles.

Roles, Users, and Accounts

Roles are applied within the existing account and user frameworks defined in Anchore Enterprise. Resources are still scoped to the account namespace and accounts provide full resource isolation (e.g. an image must be analyzed within an account to be referenced in that account). Roles allow users to be granted permissions in both the account to which they belong as well as external accounts to facilitate resource-sharing.

Terminology

User: An authenticated identity (a principal in rbac-speak).

Account: A resource namespace and user grouping that also defines an authorization domain to which permissions are applied.

Role: A named set of permissions.

Permission: A grant that allows an action (operation) to be performed on a set of resources.

Action: The operation to be executed, such as listImages, createRegistry, and updateUser.

Target: The resource to be operated on, such as an image digest.

Role Membership: Mapping a username to a role within a specific account. This confers the permissions defined by the role to resources in the specified account to the specified user. The user is not required to be a member of the account itself.

Constraints

  1. A user may be a member of a role within one or more accounts.
  2. A user may be a member of many roles, or no roles.
  3. There is no default role set on a user when a user is created. Membership must be explicitly set.
  4. Roles are immutable. The set of actions they grant is static.
  5. Creating and deleting accounts is only available to users in the admin account. The scope of accounts and roles is different than other resources because they are global. The authorization domain for those resources is not the account name but rather the global domain: system.

Role Summary and Permissions

Role | Allowed Actions | Description
--- | --- | ---
system-admin | All actions within all domains. | Administrative control over all domains within the system. USE WITH EXTREME CAUTION.
full-control | All actions within a specific account domain. | Full control over any account for which it is granted. USE WITH EXTREME CAUTION.
account-user-admin | listUsers, createUser, updateUser, deleteUser, listRoles, getRole, listRoleMembers, createRoleMember, deleteRoleMember, getAccount, listApiKeys, createApiKey, getApiKey, updateApiKey, deleteApiKey | Manage account creation and addition of users to accounts.
account-viewer | listAccounts | Can list all accounts on the system. This role is only available for use in the system domain and can only be conferred by a system administrator.
image-analyzer | listImages, getImage, createImage, getImageEvaluation, listEvents, getEvent, listSubscriptions, importImage, importSource, getSubscription, getAccount, listSources, getSource, importSource, getSourceEvaluation, listSubscriptions, updateSubscription, getSubscription, deleteSubscription, createSubscription, createArtifactRelationships, listArtifactRelationships, viewReports | Submit images for analysis and get results, but not change configuration. Intended for CI/CD systems and automation.
image-developer | listImages, getImage, listPolicies, getPolicy, listSubscriptions, getSubscription, listRegistries, getRegistry, getImageEvaluation, listFeeds, listServices, getService, listEvents, getEvent, listArchives, listArchiveTransitionRules, getArchiveTransitionRule, listArchivedImageAnalysis, getArchivedImageAnalysis, getArchiveTransitionRuleHistory, getAccount, listNotificationEndpoints, listNotificationEndpointConfigurations, getNotificationEndpointConfiguration, getActions, listAlerts, getAlert, getCorrection, getApplication, listSources, getSource, getSourceEvaluation, listArtifactRelationships | Permissions to view images, vulnerabilities, and policy evaluations.
image-lifecycle | createArchivedImageAnalysis, createArchiveTransitionRule, deleteArchivedImageAnalysis, deleteArchiveTransitionRule, deleteArchiveTransitionRuleHistory, getArchivedImageAnalysis, getArchiveTransitionRule, getArchiveTransitionRuleHistory, listArchivedImageAnalysis, listArchives, listArchiveTransitionRules | Permissions to manage archives and archival rules.
inventory-agent | syncInventory | Minimal permissions for use with runtime inventory agents (k8s or ECS).
read-write | createImage, createPolicy, createRegistry, createRepository, createSubscription, deleteEvents, deleteImage, deletePolicy, deleteRegistry, deleteSubscription, getAccount, getEvent, getImage, getImageEvaluation, getPolicy, getRegistry, getService, getSubscription, importImage, importSource, listEvents, listFeeds, listImages, listPolicies, listRegistries, listServices, listSubscriptions, updateFeeds, updatePolicy, updateRegistry, updateSubscription, listArchives, listArchiveTransitionRules, getArchiveTransitionRule, createArchiveTransitionRule, deleteArchiveTransitionRule, listArchivedImageAnalysis, getArchivedImageAnalysis, createArchivedImageAnalysis, deleteArchivedImageAnalysis, getArchiveTransitionRuleHistory, listNotificationEndpoints, listNotificationEndpointConfigurations, getNotificationEndpointConfiguration, createNotificationEndpointConfiguration, updateNotificationEndpointConfiguration, deleteNotificationEndpointConfiguration, listRuntimeInventories, getRuntimeInventory, createRuntimeInventory, syncInventory, deleteInventory, getActions, addAction, listAlerts, getAlert, createAlert, updateAlert, getCorrection, addCorrection, updateCorrection, deleteCorrection, createApplication, getApplication, deleteApplication, updateApplication, listSources, getSource, importSource, getSourceEvaluation, createArtifactRelationship, listArtifactRelationships, deleteArtifactRelationships, getArtifactRelationshipDiff, createScheduledQuery, updateScheduledQuery, executeScheduledQuery, deleteScheduledQuery, deleteScheduledQueryResult, viewReports, getKubernetesContainers, getKubernetesClusters, getKubernetesNamespaces, getKubernetesNodes, getKubernetesPods, getKubernetesVulnerabilities, listRuntimeInventories, getECSContainers, getECSServices, getECSTasks | Full read-write permissions for regular account-level resources, excluding user/role management.
read-only | listImages, getImage, listPolicies, getPolicy, listSubscriptions, getSubscription, listRegistries, getRegistry, getImageEvaluation, listFeeds, listServices, getService, listEvents, getEvent, listArchives, listArchiveTransitionRules, getArchiveTransitionRule, listArchivedImageAnalysis, getArchivedImageAnalysis, getArchiveTransitionRuleHistory, getAccount, listNotificationEndpoints, listNotificationEndpointConfigurations, getNotificationEndpointConfiguration, listRuntimeInventories, getRuntimeInventory, getActions, listAlerts, getAlert, getCorrection, getApplication, listSources, getSource, getSourceEvaluation, listArtifactRelationships, viewReports, getKubernetesContainers, getKubernetesClusters, getKubernetesNamespaces, getKubernetesNodes, getKubernetesPods, getKubernetesVulnerabilities, listRuntimeInventories, getECSContainers, getECSServices, getECSTasks | Read-only access to account resources, but includes policy evaluation permission.
policy-editor | listImages, listSubscriptions, listPolicies, getImage, getPolicy, getImageEvaluation, createPolicy, updatePolicy, deletePolicy, getAccount, getCorrection, listSources, getSource, getSourceEvaluation, viewReports | Edit policies and get evaluations of images; intended for users who set policies but do not change the scanning configurations.
repo-analyzer | createRepository, updateSubscription (specifically for activation of type repo_update) | Permission to allow analysis of repositories.
report-admin | listImages, createScheduledQuery, updateScheduledQuery, executeScheduledQuery, deleteScheduledQuery, deleteScheduledQueryResult, viewReports | Permissions to administer reports and schedules.
registry-editor | createRegistry, deleteRegistry, getRegistry, listRegistries, updateRegistry | Permissions to manage registry credentials.

Note: All account-scoped roles also have these permissions implicitly granted: selfListApiKeys, selfCreateApiKey, selfUpdateApiKey, selfDeleteApiKey, selfGetApiKey, selfGetCredentials, selfAddCredential, selfDeleteCredential

Granting Cross-Account Access

The Anchore API supports a specific mechanism for allowing a user to make requests in another account’s namespace: the x-anchore-account header. By including x-anchore-account: "desiredaccount" on a request, a user can attempt that request in the namespace of the other account. This is subject to full authorization and RBAC.
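
For example, the following curl sketch lists images in another account’s namespace; the hostname, port, credentials, account name, and API path here are illustrative, so adjust them to your deployment and API version:

curl -u someuser:somepassword -H "x-anchore-account: some_other_account" http://localhost:8228/v2/images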

To grant a username the ability to execute operations in another account, simply make the username a member of a role in the desired account. This can be accomplished in the UI or via API against the RBAC Manager service endpoint. For example, using curl:

curl -u admin:foobar -X POST -H "Content-Type: application/json" -d '{"username": "someuser", "for_account": "some_other_account"}' http://localhost:8229/roles/policy-editor/members

This should be done with caution as there is currently no support for resource-specific access controls. A user will have the permitted actions on all resources in the other account (based on the permissions of the role). For example, making a user a member of the policy-editor role for another account gives them full ability to create, delete, and update that account’s policy bundles.

WARNING: Because roles do not currently provide custom target/resource definitions, assigning users to the Account User Admin role for an account other than their own is dangerous because there are no guards against that user then removing permissions of the granting user (unless that user is an ‘admin’ account user), so use with extreme caution.

NOTE: admin account users are not subject to RBAC constraints and therefore always have full access to add/remove users to roles in any account. So, while it is possible to grant them role membership, the result is a no-op and does not change the permissions of the user in any way.

4.21.4.1 - User Groups

Overview

User groups are abstractions that allow an administrator to manage permissions for users across the system without having to manage each individual user’s permissions.

Administrators simply have to create a user group, define roles per account within the user group, and then associate users with it. Users can be associated with multiple user groups. Each user inherits roles from their user groups as well as any explicitly defined roles.

Users can be explicitly added to a User Group (as described above), or SAML users can have an indirect membership of a user group based on their IDP associations.

Note: User Group management is strictly limited to admin users only.

Terminology

  • User Group: A basic resource that grants roles and permissions to users on various accounts. E.g.
    {
        "name": "user-group-engineers",
        "description": "The group permissions for all engineers"
    }
  • User Group Roles: A collection of roles associated with a user group; this can span multiple accounts and have multiple roles per account. E.g.
    [
        {Account: "devs_account",    Roles: ["policy-editor", "image-analyzer"]},
        {Account: "devops_account",  Roles: ["read-write"]},
        {Account: "preview_account", Roles: ["read-only"]}
    ]
  • IDP User Group Mappings: A set of User Groups that are mapped to a single Identity Provider. E.g.
    {
        IDP Name: "keycloak",
        User Groups: ["user-group-engineers", "user-group-devsec", "user-group-auditors"]
    }
  • User Group Native User Member: A native user who has been explicitly associated with a User Group. This user inherits all roles from the User Group in addition to any roles assigned directly to this user.
  • User Group IDP Member: A SAML user who is an indirect member of a User Group. When the SAML user authenticates, the IDP’s User Group Mappings are used to determine whether this user should be associated with a User Group.

Native users

Native users are users that are defined in Anchore Enterprise and do not authenticate using an external SSO endpoint. These users can be added to User Groups directly and inherit roles from the User Groups they are members of.

SAML(SSO) users

SAML users are users that authenticate using an external SAML IDP. These users are automatically associated with User Groups based on their group memberships in the SAML IDP and the IDP’s User Group mappings.

User Group management

User Groups can be managed from the Anchore Enterprise UI or using the Anchore Enterprise API.

AnchoreCTL

User Groups can be managed using the anchorectl CLI tool. The following commands are available for User Group management:

  • To create a new User Group, use the following command:
# anchorectl usergroup add development --description "The development team"
 ✔ Added usergroup                                                                                                                                                                                                       
Name: development
Description: The development team
Group Uuid: 4a5d8357-1fc3-44cf-8a1c-9882406df656
Created At: 2024-03-20T15:57:20.086665Z
Last Updated: 2024-03-20T15:57:20.086669Z
Account Roles:
  Items: 
  • To list all User Groups, use the following command:
# anchorectl usergroup list
┌─────────────┬──────────────────────┬──────────────────────────────────────┐
│ NAME        │ DESCRIPTION          │ GROUP UUID                           │
├─────────────┼──────────────────────┼──────────────────────────────────────┤
│ development │ The development team │ 4a5d8357-1fc3-44cf-8a1c-9882406df656 │
└─────────────┴──────────────────────┴──────────────────────────────────────┘
  • To edit the description of a User Group, use the following command:
# anchorectl usergroup update development --description "New development team description"
 ✔ Update usergroup                                                                                                                                                                                                      
Name: development
Description: New development team description
Group Uuid: 4a5d8357-1fc3-44cf-8a1c-9882406df656
Created At: 2024-03-20T15:57:20.086665Z
Last Updated: 2024-03-20T16:00:17.989822Z
Account Roles:
  Items: 
  • To delete a User Group, use the following command:
# anchorectl usergroup delete development
 ✔ Deleted usergroup                                                                                                                                                                                                     
No results                                                                                                                                                                                                    
  • To add an account role to a User Group, use the following command:
# anchorectl usergroup role add development dev_account --role image-analyzer,image-developer,read-only,repo-analyzer
 ✔ Added account and role(s)                                                                                                                                                                                             
┌────────────────┬───────────────────────────────────────────────────────────┐
│ ACCOUNT/DOMAIN │ ROLES                                                     │
├────────────────┼───────────────────────────────────────────────────────────┤
│ dev_account    │ image-analyzer, image-developer, read-only, repo-analyzer │
└────────────────┴───────────────────────────────────────────────────────────┘

# anchorectl usergroup role add development devops_account --role read-only                                                
 ✔ Added account and role(s)                                                                                                                                                                                             
┌────────────────┬───────────────────────────────────────────────────────────┐
│ ACCOUNT/DOMAIN │ ROLES                                                     │
├────────────────┼───────────────────────────────────────────────────────────┤
│ dev_account    │ image-analyzer, image-developer, read-only, repo-analyzer │
│ devops_account │ read-only                                                 │
└────────────────┴───────────────────────────────────────────────────────────┘
  • To list all account roles for a User Group, use the following command:
# anchorectl usergroup role list development                                                                               
 ✔ Fetched usergroups accounts and roles                                                                                                                                                                                 
┌────────────────┬───────────────────────────────────────────────────────────┐
│ ACCOUNT/DOMAIN │ ROLES                                                     │
├────────────────┼───────────────────────────────────────────────────────────┤
│ dev_account    │ image-analyzer, image-developer, read-only, repo-analyzer │
│ devops_account │ read-only                                                 │
└────────────────┴───────────────────────────────────────────────────────────┘
  • To remove account role(s) from a User Group, use the following command:
# anchorectl usergroup role delete development dev_account --role image-analyzer,image-developer 
 ✔ Deleted role                                                                                                                                                                                                          
No results
  • To add a native user to a User Group, use the following command:
# anchorectl usergroup user add development -u dev_user
 ✔ Added user(s)                                                                                                                                                                                                         
┌──────────┬─────────────────────────────┐
│ USERNAME │ ADDED TO USER GROUP ON      │
├──────────┼─────────────────────────────┤
│ dev_user │ 2024-03-20T16:30:20.092909Z │
└──────────┴─────────────────────────────┘
  • To list all members of a User Group, use the following command:
# anchorectl usergroup user list development
 ✔ Fetched users within usergroup                                                                                                                                                                                        
┌──────────┬─────────────────────────────┐
│ USERNAME │ ADDED TO USER GROUP ON      │
├──────────┼─────────────────────────────┤
│ dev_user │ 2024-03-20T16:30:20.092909Z │
└──────────┴─────────────────────────────┘
  • To remove a native user from a User Group, use the following command:
# anchorectl usergroup user delete development -u dev_user
 ✔ Deleted user(s)                                                                                                                                                                                                       
No results

4.22 - Windows Container Scanning

Anchore can analyze and provide vulnerability matches for Microsoft Windows images. Anchore downloads, unpacks, and analyzes the Microsoft Windows image contents similarly to Linux-based images, providing OS information as well as discovered application packages such as NPM packages, Ruby gems, Python packages, NuGet packages, and Java archives.

Vulnerabilities for Microsoft Windows images are matched against the detected operating system version and KBs installed in the image. These are matched using data from the Microsoft Security Research Center (MSRC) data API.

Supported Windows Base Image Versions

The following are the MSRC Product IDs that Anchore can detect and provide vulnerability information for. These cover the main variants of the base Windows containers: Windows, ServerCore, NanoServer, and IoTCore.

Product ID | Name
--- | ---
10951 | Windows 10 Version 1703 for 32-bit Systems
10952 | Windows 10 Version 1703 for x64-based Systems
10729 | Windows 10 for 32-bit Systems
10735 | Windows 10 for x64-based Systems
10789 | Windows 10 Version 1511 for 32-bit Systems
10788 | Windows 10 Version 1511 for x64-based Systems
10852 | Windows 10 Version 1607 for 32-bit Systems
10853 | Windows 10 Version 1607 for x64-based Systems
11497 | Windows 10 Version 1803 for 32-bit Systems
11498 | Windows 10 Version 1803 for x64-based Systems
11563 | Windows 10 Version 1803 for ARM64-based Systems
11568 | Windows 10 Version 1809 for 32-bit Systems
11569 | Windows 10 Version 1809 for x64-based Systems
11570 | Windows 10 Version 1809 for ARM64-based Systems
11453 | Windows 10 Version 1709 for 32-bit Systems
11454 | Windows 10 Version 1709 for x64-based Systems
11583 | Windows 10 Version 1709 for ARM64-based Systems
11644 | Windows 10 Version 1903 for 32-bit Systems
11645 | Windows 10 Version 1903 for x64-based Systems
11646 | Windows 10 Version 1903 for ARM64-based Systems
11712 | Windows 10 Version 1909 for 32-bit Systems
11713 | Windows 10 Version 1909 for x64-based Systems
11714 | Windows 10 Version 1909 for ARM64-based Systems
10379 | Windows Server 2012 (Server Core installation)
10543 | Windows Server 2012 R2 (Server Core installation)
10816 | Windows Server 2016
11571 | Windows Server 2019
10855 | Windows Server 2016 (Server Core installation)
11572 | Windows Server 2019 (Server Core installation)
11499 | Windows Server, version 1803 (Server Core Installation)
11466 | Windows Server, version 1709 (Server Core Installation)
11647 | Windows Server, version 1903 (Server Core installation)
11715 | Windows Server, version 1909 (Server Core installation)

Windows Operating System Packages

Just as Linux images are scanned for packages such as RPMs, DPKGs, and APKs, Windows images are scanned for installed components and Knowledge Base patches (KBs). When listing operating system content on a Microsoft Windows image, the results returned are numeric KB identifiers; both the name and version fields contain the same value, the KB ID.
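
As a sketch, you can list these KB entries for an analyzed Windows image using AnchoreCTL’s content query; the image reference below is illustrative, so substitute one that has been analyzed in your deployment:

# anchorectl image content mcr.microsoft.com/windows/servercore:ltsc2019 -t os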

5 - Monitoring

After you have installed Anchore Enterprise, there are various ways to monitor its operations:

5.1 - System Health

Overview

Added in Anchore Enterprise 2.2, the Health section within the System tab is an administrator’s new display for investigating the operational status of their system’s various services and feeds. Leverage this view to understand when your system is ready or if it requires intervention.

The following sections in this document describe how to determine system readiness, the state of your services, and the progression of your feed sync.

For more information on the overall architecture of a full Anchore Enterprise deployment, please refer to the Architecture documentation. Or refer to the Feeds Overview if you’re interested in the feeds-side of things.

System Readiness

Ready

(Tentatively) Ready

Not Ready

The indicator for system readiness can be seen from any screen by viewing the System tab header:

The system readiness status relies on the service and feed data which are routinely updated every 5 minutes. Using the example indicator provided above, once all the feed groups are successfully synced, the status icon will turn green.

For up-to-date information outside of the normal update cycle, navigate to the Health section within the System tab and click on Refresh Service Health, Refresh Feed Data, or manually refresh the page.

Services

As shown above and as of 2.2, there are five services required by the system to function (API, Catalog, Policy Engine, SimpleQueue, and Analyzer).

For every service, the Base URL, Host ID, and Version are displayed. As long as one instance of each service is up and available, the main system is regarded as ready. In the example image provided above, we see that we have multiple instances of the Policy Engine and Analyzer services.

For the full, filterable list of instances for that service, click on the numbers provided. In the case of the Policy Engine, that would be the 1/2 Available.

Note that orphaned services are filtered out by default in this view (with a toggle to include them again) but will still impact the availability count on the main page.

In the case of service errors, they are logged within the Events & Notifications tab so we recommend following up there for more information or browse our Troubleshooting documentation for remediation guidance.

Feeds Sync

Listed in this section are the various feed groups your system relies on for vulnerability and package data. This data comes from a variety of upstream sources and is vital for policy engine operations such as evaluating policies or listing vulnerabilities.

As shown, you can keep track of your sync progression using the Last Sync column. To manually update the feed data displayed outside of its normal 5-minute cycle, click the Refresh Feed Data button or refresh the page.

If you’d rather have them grouped by feed rather than listed out individually, you can toggle the layout from list to cards using the buttons in the top-right corner above the table:

Similar to the service cards, if you decide to have them grouped as we show below using the layout buttons, you can click on the number of groups synced to view the full, filterable list within.

When viewing a list of feed groups - whether through the default list or through a specific feed card - you can filter for a specific value using the input provided or click on the button attached to filter by category. In this case, groups can be filtered by whether they are synced or unsynced.

In the case of feed sync errors, they are logged within the Events & Notifications tab so we recommend following up there for more information or browse our Troubleshooting documentation for remediation guidance.

Or if you’re interested in an overview of the various drivers Enterprise Feeds uses, check out our Feeds Overview.

5.2 - Prometheus

Anchore Enterprise exposes Prometheus metrics in the API of each service if the config.yaml used by that service has the metrics.enabled key set to true.
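
For example, the relevant fragment of a service’s config.yaml is a single key (minimal sketch):

metrics:
  enabled: true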

Each service exports its own metrics and is typically scraped by a Prometheus installation to gather them; Anchore does not aggregate or distribute metrics between services. Configure your Prometheus deployment or integration to scrape the /metrics route on each Anchore service, using the same port the service exposes for its API.
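
A minimal Prometheus scrape configuration sketch follows; the job names and target host:port pairs are illustrative and should be replaced with your own service endpoints:

scrape_configs:
  - job_name: anchore-api
    metrics_path: /metrics
    static_configs:
      - targets: ['engine-api:8228']
  - job_name: anchore-simplequeue
    metrics_path: /metrics
    static_configs:
      - targets: ['engine-simpleq:8228']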

Monitoring in Kubernetes and/or Helm Chart

Prometheus is very commonly used for monitoring Kubernetes clusters, and many core Kubernetes components expose Prometheus metrics natively. There are many guides on using Prometheus to monitor a cluster and the services deployed within it, and many other monitoring systems can also consume Prometheus metrics.

The Anchore Helm Chart includes a quick way to enable the Prometheus metrics on each service container:

  • Set: helm install --name myanchore anchore/anchore-engine --set anchoreGlobal.enableMetrics=true

  • Or, set it directly in your customized values.yaml

The specific strategy for monitoring services with Prometheus is outside the scope of this document, but because Anchore exposes metrics on the /metrics route of all service ports, it should be compatible with most monitoring approaches (DaemonSets, sidecars, etc.).

Metrics of Note

Anchore services export a range of metrics. The following list highlights some metrics that can help you determine the health and load of an Anchore deployment.

  • anchore_queue_length, specifically for queuename: "images_to_analyze"
    • This is the number of images pending analysis, in the not_analyzed state.
    • As this number grows you can expect longer analysis times.
    • Adding more analyzers to a system can help drain the queue faster and keep wait times to a minimum.
    • Example: anchore_queue_length{instance="engine-simpleq:8228",job="anchore-simplequeue",queuename="images_to_analyze"}
    • This metric is exported from all simplequeue service instances, but is based on the database state, so they should all present a consistent view of the length of the queue. An alerting rule sketch based on this metric appears after this list.
  • anchore_monitor_runtime_seconds_count
    • These metrics, one for each monitor, record the duration of the async processes as they execute on a duty cycle.
    • As the system grows, these will become longer to account for more tags to check for updates, repos to scan for new tags, and user notifications to process.
  • anchore_tmpspace_available_bytes
    • This metric tracks the available space in the “tmp_dir” location for each container. This is most important for the instances that are analyzers where this can indicate how much disk is being used for analysis and how much overhead there is for analyzing large images.
    • This is expected to be consumed in cycles, with usage growing during analysis and then flushing upon completion. A consistent growth pattern here may indicate left over artifacts from analysis failures or a large layer_cache setting that is not yet full. The layer cache (see Layer Caching) is located in this space and thus will affect the metric.
  • process_resident_memory_bytes
    • This is the memory actually consumed by the instance, where each instance is a service process of Anchore. Anchore is fairly memory intensive for large images and in deployments with many analyzed images, due to heavy JSON parsing and marshalling, so monitoring this metric will help inform capacity requirements for different components based on your specific workloads. Many variables affect memory usage, so while we give recommendations in the Capacity Planning document, there is no substitute for profiling and monitoring your usage carefully.
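
As a starting point for alerting on these metrics, the sketch below shows two illustrative Prometheus alerting rules; the thresholds and durations are placeholders that should be tuned to your workload.

groups:
  - name: anchore
    rules:
      - alert: AnchoreAnalysisQueueBacklog
        # images waiting in the analysis queue for an extended period
        expr: anchore_queue_length{queuename="images_to_analyze"} > 10
        for: 30m
        annotations:
          summary: "Analysis queue backlog; consider adding analyzers"
      - alert: AnchoreTmpSpaceLow
        # tmp_dir space on a container dropping below ~5GB
        expr: anchore_tmpspace_available_bytes < 5e9
        for: 15m
        annotations:
          summary: "Low tmp_dir space on an Anchore container"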

5.3 - Event Log

Introduction

The event log subsystem provides users with a mechanism to inspect asynchronous events occurring across various Anchore Enterprise services. Anchore events include periodically triggered activities such as vulnerability data feed syncs in the policy-engine service, image analysis failures originating from the analyzer service, and other informational or system fault events. The catalog service may also generate events for any repositories or image tags that are being watched, when the engine encounters connectivity, authentication, authorization, or other errors in the process of checking for updates. The event log is aimed at troubleshooting the most common failure scenarios (especially those that happen during asynchronous operations) and at pinpointing the reasons for failures so that corrective actions can be taken. Events can be cleared from the system in bulk or individually.

The Anchore events (drawn from the event log) can be accessed through the Anchore Enterprise API and AnchoreCTL, or can be emitted as webhooks if your Anchore Enterprise deployment is configured to send webhook notifications. For API usage, refer to the document on using the Anchore Enterprise API.

Accessing Events

The anchorectl command can be used to list events and filter through the results, get the details for a specific event and delete events matching certain criteria.

# anchorectl event --help
Event related operations

Usage:
   event [command]

Available Commands:
  delete      Delete an event by its ID or set of filters
  get         Lookup an event by its event ID
  list        Returns a paginated list of events in the descending order of their occurrence

Flags:
  -h, --help   help for event

Use " event [command] --help" for more information about a command.

For help regarding global flags, run --help on the root command

For a list of the most recent events:


anchorectl event list
 ✔ List events
┌──────────────────────────────────┬──────────────────────────────────────────────┬───────┬─────────────────────────────────────────────────────────────────────────┬─────────────────┬────────────────┬────────────────────┬─────────────────────────────┐
│ UUID                             │ EVENT TYPE                                   │ LEVEL │ RESOURCE ID                                                             │ RESOURCE TYPE   │ SOURCE SERVICE │ SOURCE HOST        │ TIMESTAMP                   │
├──────────────────────────────────┼──────────────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────────────────────┼─────────────────┼────────────────┼────────────────────┼─────────────────────────────┤
│ 8c179a3b27a543fe9285cf4feb65561d │ system.image_analysis.registry_lookup_failed │ error │ docker.io/alpine:3.4                                                    │ image_reference │ catalog        │ anchore-quickstart │ 2022-08-24T23:08:30.54001Z  │
│ 48c18a84575d45efbf5b41e0f3a87177 │ system.image_analysis.registry_lookup_failed │ error │ docker.io/alpine:latest                                                 │ image_reference │ catalog        │ anchore-quickstart │ 2022-08-24T23:08:30.510193Z │
│ f6084efd159c43a1a0518b6df5e58505 │ system.image_analysis.registry_lookup_failed │ error │ docker.io/alpine:3.12                                                   │ image_reference │ catalog        │ anchore-quickstart │ 2022-08-24T23:08:30.480625Z │
│ 4464b8f83df046388152067122c03610 │ system.image_analysis.registry_lookup_failed │ error │ docker.io/alpine:3.8                                                    │ image_reference │ catalog        │ anchore-quickstart │ 2022-08-24T23:08:30.450983Z │
...
│ 60f14821ff1d407199bc0bde62f537df │ system.image_analysis.restored_from_archive  │ info  │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest    │ catalog        │ anchore-quickstart │ 2022-08-24T22:53:12.662535Z │
│ cd749a99dca8493889391ae549d1bbc7 │ system.analysis_archive.image_archived       │ info  │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest    │ catalog        │ anchore-quickstart │ 2022-08-24T22:48:45.719941Z │
...
└──────────────────────────────────┴──────────────────────────────────────────────┴───────┴─────────────────────────────────────────────────────────────────────────┴─────────────────┴────────────────┴────────────────────┴─────────────────────────────┘

Note: Events are ordered by the timestamp of their occurrence; the most recent events are at the top of the list and the least recent at the bottom.

There are a number of ways to filter the event list output (see anchorectl event list --help for filter options):

For troubleshooting events related to a specific event type:

# anchorectl event list --event-type system.analysis_archive.image_archive_failed
 ✔ List events
┌──────────────────────────────────┬──────────────────────────────────────────────┬───────┬──────────────┬───────────────┬────────────────┬────────────────────┬────────────────────────────┐
│ UUID                             │ EVENT TYPE                                   │ LEVEL │ RESOURCE ID  │ RESOURCE TYPE │ SOURCE SERVICE │ SOURCE HOST        │ TIMESTAMP                  │
├──────────────────────────────────┼──────────────────────────────────────────────┼───────┼──────────────┼───────────────┼────────────────┼────────────────────┼────────────────────────────┤
│ 35114639be6c43a6b79d1e0fef71338a │ system.analysis_archive.image_archive_failed │ error │ nginx:latest │ image_digest  │ catalog        │ anchore-quickstart │ 2022-08-24T22:48:23.18113Z │
└──────────────────────────────────┴──────────────────────────────────────────────┴───────┴──────────────┴───────────────┴────────────────┴────────────────────┴────────────────────────────┘

To filter events by level such as ERROR or INFO:

anchorectl event list --level info
 ✔ List events
┌──────────────────────────────────┬─────────────────────────────────────────────┬───────┬─────────────────────────────────────────────────────────────────────────┬───────────────┬────────────────┬────────────────────┬─────────────────────────────┐
│ UUID                             │ EVENT TYPE                                  │ LEVEL │ RESOURCE ID                                                             │ RESOURCE TYPE │ SOURCE SERVICE │ SOURCE HOST        │ TIMESTAMP                   │
├──────────────────────────────────┼─────────────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────────────────────┼───────────────┼────────────────┼────────────────────┼─────────────────────────────┤
│ 60f14821ff1d407199bc0bde62f537df │ system.image_analysis.restored_from_archive │ info  │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest  │ catalog        │ anchore-quickstart │ 2022-08-24T22:53:12.662535Z │
│ cd749a99dca8493889391ae549d1bbc7 │ system.analysis_archive.image_archived      │ info  │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest  │ catalog        │ anchore-quickstart │ 2022-08-24T22:48:45.719941Z │
...

Note: The event listing response is paginated; anchorectl displays the first 100 events matching the filters. For all results, use the --all flag.

All available options for listing events:


# anchorectl event list --help
Returns a paginated list of events in the descending order of their occurrence. Optional query parameters may be used for filtering results

Usage:
   event list [flags]

Flags:
      --all                    return all events (env: ANCHORECTL_EVENT_ALL)
      --before string          return events that occurred before the ISO8601 formatted UTC timestamp
                               (env: ANCHORECTL_EVENT_BEFORE)
      --event-type string      filter events by a prefix match on the event type (e.g. "user.image.")
                               (env: ANCHORECTL_EVENT_TYPE)
  -h, --help                   help for list
      --host string            filter events by the originating host ID (env: ANCHORECTL_EVENT_SOURCE_HOST_ID)
      --level string           filter events by the level - INFO or ERROR (env: ANCHORECTL_EVENT_LEVEL)
  -o, --output string          the format to show the results (allowable: [text json json-raw id]; env: ANCHORECTL_FORMAT) (default "text")
      --page int32             return the nth page of results starting from 1. Defaults to first page if left empty
                               (env: ANCHORECTL_PAGE)
      --resource-type string   filter events by the type of resource - tag, imageDigest, repository etc
                               (env: ANCHORECTL_EVENT_RESOURCE_TYPE)
      --service string         filter events by the originating service (env: ANCHORECTL_EVENT_SOURCE_SERVICE_NAME)
      --since string           return events that occurred after the ISO8601 formatted UTC timestamp
                               (env: ANCHORECTL_EVENT_SINCE)

For help regarding global flags, run --help on the root command

Event listing displays a brief summary of each event. To get more detailed information about an event, such as the host where the event occurred or the underlying error:


# anchorectl event get c31eb023c67a4c9e95278473a026970c
 ✔ Fetched event
UUID: c31eb023c67a4c9e95278473a026970c
Event:
  Event Type: system.image_analysis.registry_lookup_failed
  Level: error
  Message: Referenced image not found in registry
  Resource:
    Resource ID: docker.io/aerospike:latest
    Resource Type: image_reference
    User Id: admin
  Source:
    Source Service: catalog
    Base Url: http://catalog:8228
    Source Host: anchore-quickstart
    Request Id:
  Timestamp: 2022-08-24T22:08:28.811441Z
  Category:
  Details: cannot fetch image digest/manifest from registry
Created At: 2022-08-24T22:08:28.812749Z

Clearing Events

Events can be cleared/deleted from the system in bulk or individually. Bulk deletion allows for specifying filters to clear the events within a certain time window. To delete all events from the system:

# anchorectl event delete --all
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure you want to delete all events:
  ▸ Yes
    No
 ⠙ Deleting event
c31eb023c67a4c9e95278473a026970c
329ff24aa77549458e2656f1a6f4c98f
649ba60033284b87b6e3e7ab8de51e48
4010f105cf264be6839c7e8ca1a0c46e
...

Delete events before a specified timestamp (can also use --since instead of --before to delete events that were generated after a specified timestamp):

# anchorectl event delete --before 2022-08-24T22:08:28.629543Z
 ✔ Deleted event
ce26f1fa1baf4adf803d35c86d7040b7
081394b6e62f4708a10e521a960c54d7
d21b587dea5844cc9c330ba2b3d02d2e
7784457e6bf84427a175658f134f3d6a
...

Delete a specific event:

# anchorectl event delete fa110d517d2e43faa8d8e2dfbb0596af
 ✔ Deleted event
fa110d517d2e43faa8d8e2dfbb0596af

Sending Events as Webhook Notifications

In addition to access via the API and AnchoreCTL, Anchore Enterprise may be configured to send notifications for events as they are generated in the system via its webhook subsystem. Webhook notifications for event log records are turned off by default. To enable the ‘event_update’ webhook, uncomment the ‘event_log’ section under ‘services->catalog’ in config.yaml, as in the following example:

services:
  ...
  catalog:
    ...
    event_log:
      notification:    
        enabled: True
        # (optional) notify events that match these levels. If this section is commented, notifications for all events are sent
        level:
        - error

Note: In order for events to be sent via webhook notifications, you’ll need to ensure that the webhook subsystem is configured in config.yaml (if it isn’t already) - refer to the document on subscriptions and notifications for information on how to enable webhooks in Anchore Enterprise. Event notifications will be sent to the ‘event_update’ webhook endpoint if it is defined, and to the ‘general’ webhook endpoint otherwise.
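
For reference, a minimal sketch of a webhooks section in config.yaml is shown below; the receiver URL is a placeholder, and the <notification_type> and <userId> tokens are substituted by Anchore at send time. Consult the subscriptions and notifications documentation for the authoritative format.

webhooks:
  ssl_verify: false
  general:
    url: 'http://my-webhook-receiver:9090/general/<notification_type>/<userId>'
  # an 'event_update' endpoint may be defined similarly if you want event
  # notifications delivered to a dedicated endpoint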

Events via the UI

The Events tab is your gateway to current and historical activity happening in your system. View various events such as policy evaluation and vulnerability updates, system errors, feed syncs, and more.

The following sections in this document describe how to view event details, how to filter for specific events you’re interested in, and how to manage events with bulk deletion.

UI Events View

Viewing Events

In order to view events, navigate to the Events & Notifications > View Events tab. By default, the most recent activity (up to 1000 events) is shown and is automatically updated for you every 5 minutes. Note that if you have applied any filters through the search bar, your results will need to be refreshed manually.

Top-level details such as the event’s level (whether it’s an INFO or ERROR event), type, message, and affected resource are shown. Dig into a specific event by clicking View Details under its Actions column to expand the row.

UI Event Details

Additional information, such as the originating service and host ID, is available in the expanded row. Any details given by the service are also provided in JSON format to view or copy to the clipboard.

Filtering Events

UI Events Search

Often, you might want to search for a specific event type or events that happened after a certain time. In this case, use the Search Events bar near the top of the page to select a filter to search on. These include:

  • Level: Filter events by level (INFO or ERROR)
  • Event Type: Filter events by a match on the event type (e.g. "user.image.*")
  • Since: Return events that occurred after the timestamp
  • Before: Return events that occurred before the timestamp
  • Source Servicename: Filter events by the originating service
  • Source Host ID: Filter events by the originating host ID
  • Resource Type: Filter events by the type of resource (tag, imageDigest, repository, etc.)
  • Resource ID: Filter events by the ID of the resource

Once you have selected and populated the filter fields you’re interested in, click Apply Filters to search and show those filtered results.

An alternative way to filter your results is through the in-table filter input. Note that this only applies against any data already fetched. To increase what you’re filtering on, click Fetch More near the top-right of the table for up to an additional 1000 items.

To remove any filters and reset to the default view, click Clear Filters.

Deleting Events

To assist with event management, event deletion has been added in the Enterprise 2.3 release.

Deleting individual events can be done by clicking Delete under the Actions column and selecting Yes to confirm. Note that after deletion, events are not recoverable.

Multi-select is available for deleting multiple events at a time. Upon selecting an event using the checkbox in the far-left column, a toolbar-like component will slide in at the bottom of the table. The number of events selected is shown along with the selection type, Clear Selection, and Delete Events options.

Checking the box in the header will select all events within that page.

UI Events Bulk Deletion

By default, the selection is treated as Custom. Choosing to select All Retrieved events auto-selects everything already fetched and present in the table (i.e. if a filter is applied, events not matching the filter are not selected, but will be upon removal of the filter). In this state, deselecting an item reverts to a custom selection.

Selecting All events will also auto-select all events already fetched and present in the table, but while applying a filter may modify what’s viewable, this option is intended for clearing the entire backlog of events, including those not shown. In this state, deselecting an item likewise reverts to a custom selection.

Once you have selected the events you wish to remove, click Delete Events to open a modal and review up to 50 items. Any events you don’t wish to delete anymore can be deselected as well. To continue with removal, click Yes to confirm and start the process.

Note that events are account-wide and that any events removed will be mirrored across all users in the account.

6 - Upgrading Anchore Enterprise

Upgrading from one version of Anchore Enterprise to another is normally handled seamlessly by the Helm chart or the Docker Compose configuration files that are provided along with each release; both follow the general methods in this guide. See the Specific Version Upgrades section for special instructions related to specific versions.

Upgrade scenarios

Anchore Enterprise is distributed as a Docker image, composed of smaller microservices that can be deployed in a single container or scaled out to handle load.

To retrieve the version of a running instance of Anchore, run the anchorectl system status command. The last column, titled “CODE VERSION”, displays the running version of each service.

anchorectl system status
 ✔ Status system                                                                                                                                                                                                                                                            
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE         │ HOST ID            │ URL                         │ UP   │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer        │ anchore-quickstart │ http://analyzer:8228        │ true │ available      │ 25         │ 4.9.5        │
│ apiext          │ anchore-quickstart │ http://api:8228             │ true │ available      │ 25         │ 4.9.5        │
│ rbac_manager    │ anchore-quickstart │ http://rbac-manager:8228    │ true │ available      │ 25         │ 4.9.5        │
│ notifications   │ anchore-quickstart │ http://notifications:8228   │ true │ available      │ 25         │ 4.9.5        │
│ catalog         │ anchore-quickstart │ http://catalog:8228         │ true │ available      │ 25         │ 4.9.5        │
│ rbac_authorizer │ anchore-quickstart │ http://rbac-authorizer:8228 │ true │ available      │ 25         │ 4.9.5        │
│ reports_worker  │ anchore-quickstart │ http://reports-worker:8228  │ true │ available      │ 25         │ 4.9.5        │
│ reports         │ anchore-quickstart │ http://reports:8228         │ true │ available      │ 25         │ 4.9.5        │
│ simplequeue     │ anchore-quickstart │ http://queue:8228           │ true │ available      │ 25         │ 4.9.5        │
│ policy_engine   │ anchore-quickstart │ http://policy-engine:8228   │ true │ available      │ 25         │ 4.9.5        │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘

In this example the Anchore version is 4.9.5 and the database schema is version 25. In cases where the database schema is changed between releases, Anchore will upgrade the database schema at launch.

Pre-upgrade Procedure

Prior to upgrading Anchore, it is highly recommended to perform a database backup/snapshot: stop your Anchore installation and back up the database in its entirety. There is no automatic downgrade capability, so the only way to downgrade after an upgrade (whether it succeeds or fails) is to restore your database contents to a state from a prior version of Anchore and explicitly run the compatible version of Anchore against the corresponding database contents.

Whether or not you wish to have the ability to downgrade, backing up your Anchore database prior to upgrading the software is a recommended best practice.
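
For example, a backup of a PostgreSQL-backed deployment might be taken with pg_dump, as in this minimal sketch (the connection parameters are placeholders matching the environment variables used later in this document; run it while Anchore is stopped):

pg_dump -h $ANCHORE_DB_HOST -U $ANCHORE_DB_USER -d $ANCHORE_DB_NAME -Fc -f anchore-backup.dump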

Upgrade Procedure (for deployments using Helm)

A Helm pre-upgrade hook initiates a Kubernetes job that scales down all active Anchore Enterprise pods and handles the Anchore database upgrade.

The Helm upgrade is marked as successful only upon the job’s completion. This process causes the Helm client to pause until the job finishes and new Anchore Enterprise pods are initiated. To monitor the upgrade, follow the logs of the upgrade jobs. These jobs are automatically removed after a subsequent successful Helm upgrade.
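
For example, assuming your release is deployed in the anchore namespace, you can locate and follow the upgrade job logs with kubectl (the job name below is a placeholder discovered by the first command):

kubectl get jobs -n anchore
kubectl logs -f job/<upgrade-job-name> -n anchore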

An optional post-upgrade hook is available to perform Anchore Enterprise upgrades without forcing all pods to terminate prior to running the upgrade. This is the same upgrade behavior that was enabled by default in the legacy anchore-engine chart. To enable the post-upgrade hook, set upgradeJob.usePostUpgradeHook=true in your values file.

For the latest upgrade instructions using the Helm chart, please refer to the official Anchore Helm Chart documentation

Performing the Upgrade

  1. View the release notes for the latest Anchore Enterprise chart version and perform any necessary steps prior to upgrading.

  2. Update the Helm repository to get the latest chart version.

    helm repo update
    
  3. Upgrade Anchore Enterprise using the Helm chart.

    export NAMESPACE=anchore
    export RELEASE=my-release
    
    helm upgrade ${RELEASE} -n ${NAMESPACE} anchore/enterprise -f anchore_values.yaml
    

Upgrade Procedure (example with docker compose)

  1. Stop all running instances of Anchore

    docker compose down
    
  2. Make a copy of your original docker-compose.yaml file as backup

    cp docker-compose.yaml docker-compose.yaml.backup
    
  3. Download the latest docker-compose.yaml

    curl https://docs.anchore.com/current/docs/deployment/docker_compose/docker-compose.yaml > docker-compose.yaml
    
  4. Review the latest docker-compose.yaml and merge any edits/changes from your original docker-compose.yaml.backup to the latest docker-compose.yaml

  5. Restart the Anchore containers

    docker compose up -d
    

To monitor the progress of your upgrade, you can watch the docker logs from your catalog container, where you should see some initial output indicating whether or not an upgrade is needed or being performed, followed by the regular Anchore log output.

docker compose logs -f catalog

Once completed, you can review the new state of your Anchore install to verify the new version is running using the regular system status command.

anchorectl system status

Advanced / Manual Upgrade Procedure

If for any reason the automated upgrade fails, or you would like to perform the upgrade of the anchore database manually, you can use the following (general) procedure. This should only be done by advanced operators after backing up the anchore database, ensuring that the anchore database is up and running, and that all running anchore components are stopped.

  • Install the desired Anchore container manually.
  • Run the Anchore container but override the entrypoint to run an interactive shell instead of the default ‘anchore-manager service start’ entrypoint command.
  • Manually execute the database upgrade command, using the appropriate db_connect string. For example, if using Postgres, the db_connect string will look like postgresql://$ANCHORE_DB_HOST/$ANCHORE_DB_NAME?user=$ANCHORE_DB_USER&password=$ANCHORE_DB_PASSWORD
$ anchore-manager db --db-connect "postgresql://$ANCHORE_DB_HOST/$ANCHORE_DB_NAME?user=$ANCHORE_DB_USER&password=$ANCHORE_DB_PASSWORD" upgrade

[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_connect_args": {"timeout": 86400, "ssl": false}, "db_pool_size": 30, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
...
...
  • The output will indicate whether or not a database upgrade is needed. It will then prompt for confirmation if it is, and will display upgrade progress output before completing.

Specific Version Upgrades


This section is intended as a guide for any special instructions and information related to upgrading to specific versions of Enterprise.

Upgrading Enterprise v4.x to Enterprise v5.14.0

5.X Migration Guide

Upgrading Enterprise to 4.4.1

If you are upgrading from an Anchore Enterprise version prior to 4.2.0, there is a known issue that will require you to upgrade to 4.2.0 or 4.3.0 first. Once completed, you will have no issues upgrading to 4.4.1. Please contact Anchore Support if you need further assistance.

Please Note: This issue was addressed in 4.5.0. Upgrading from a version prior to 4.2.0 will succeed in 4.5.0 and newer releases.

6.1 - v4.x --> v5.x Migration Guide

This guide will help you understand, plan, and execute the migration of your Anchore deployment from

Enterprise v4.x --> Enterprise v5.14.0

The Enterprise v5.x Major Release involved several breaking changes. The migration to a v5.x release can be more complex than the regular Anchore feature release upgrade.

There are four significant component changes required to migrate to Enterprise v5.14.0 that each have their own migration paths. This document will help you migrate all components in a safe and downtime-minimizing way.

The components are:

  1. Anchore Enterprise: provides a new V2 API.
    • v5.14.0 only supports the new V2 API
    • v4.9.x supports both V1 and V2 APIs
  2. PostgreSQL Database: version 13+ is required for Enterprise v5.14.0
  3. Enterprise Helm Chart:
    • v5.14.0 can be deployed only with the new enterprise Helm chart.
    • The older anchore-engine chart reaches end-of-life with the 4.x series.
  4. Integrations & Clients: all Anchore-provided integrations have new released versions that are compatible with v5.14.0 and support the new V2 API.

This guide will walk you through the process of going from this starting state.

Pre Migration: <= v4.8 with V1 API Only

graph
anchore("Enterprise <= v4.8 w/V1 API")
db[("PostgreSQL 9.6")]
chart["anchore-engine chart"]
ctl["anchorectl v1.x"]

anchore --uses--> db
chart --deploys--> anchore
ctl --v1 api calls-->anchore

To this ending state, where you are running Enterprise v5.14.0 in production.

Post Migration: Full v5.14.0 with V2 API Only

graph
anchore("Enterprise v5.14.0 w/V2 API")
db[("PostgreSQL 13+")]
chart["enterprise chart"]
ctl["anchorectl v5.14.0"]

anchore --uses--> db;
chart --deploys--> anchore
ctl --v2 api calls-->anchore

Note: The upgrade to v4.9.x is very strongly recommended for all deployments as a key part of the migration process to v5.14.0. If you use ANY integrations or API calls, you should use v4.9.x and its dual-API support as the version of Anchore to run while you migrate all your integrations to use the V2 API.

Planning Your Migration

Timing: Each phase has different duration expectations, and below we’ll review the expectations and process for each phase of the migration. You should expect and plan for downtime for each phase except the client API migrations, which are done while the system is running.

The migration may be a multi-day process since it involves things like client migrations that may take days or weeks depending on your org and how many other systems are integrated with your Anchore deployment.

Combining Phases: Phases can be combined if you wish to use a smaller number of larger maintenance windows. Since combining phases increases the complexity of each phase and the associated risk of misconfigurations or errors, any combination should be considered carefully against your specific needs and risk tolerance.

Migration Path 1: Chart-Managed Database

If you have PostgreSQL deployed in Kubernetes using the Anchore-Engine Helm Chart, then this is the migration path for you.

graph
    subgraph Start
    %% Start at v4.8.x or earlier, using postgres 9.6 and the anchore-engine helm chart
        anchore4("Enterprise <= v4.8.x")
        pg9[("PostgreSQL 9.6")]
        engineChart["anchore-engine chart"]
        anchorectl("anchorectl v1.7.x") --V1 api calls--> anchore4
        anchore4 --uses--> pg9
        engineChart --deploys--> anchore4
    end
    
    subgraph step1[Latest Enterprise v4.9.x]
        %% Upgrade to v4.9.x for V2 API
        anchore49_1("Enterprise v4.9.x")
        pg9_2[("PostgreSQL 9.6")]
        engineChart1["anchore-engine chart"]
        anchore49_1 --uses--> pg9_2
        anchorectl3("anchorectl v1.8.x") --V1 api calls--> anchore49_1
        engineChart1 --deploys--> anchore49_1
    end

    subgraph step2[Chart and DB Migrated]
    %% Migrate to new Chart & DB Migration to PG13, no Anchore version change
        anchore49("Enterprise = v4.9.x")
        pg13[("PostgreSQL 13+")]
        pg96[("PostgreSQL 9.6")]
        engineChart2["anchore-engine chart"]
        enterpriseChart["enterprise chart"]
        engineChart2 --uses--> pg96
        pg96 --migrates to--> pg13
        anchore49 --uses--> pg13
        anchorectl2("anchorectl v1.8.x") --V1 api calls--> anchore49
        enterpriseChart --deploys--> anchore49
    end

    subgraph step3[Integrations Migrated]
    %% Upgrade integrations/AnchoreCTL
        anchoreInter3("Enterprise v4.9.x")
        engineChart3["anchore-engine chart"]
        enterpriseChart2["enterprise chart"]
        pg13_4[("PostgreSQL 13+")]
        pg96_2[("PostgreSQL 9.6")]
        engineChart3 --> pg96_2
        anchoreInter3 --> pg13_4
        anchorectl5("anchorectl v4.9.x") --V2 api calls--> anchoreInter3
        enterpriseChart2 --deploys--> anchoreInter3
    end

    subgraph finish["Enterprise v5.14.0"]
    %% Upgrade to v5.14.0
        anchore5("Enterprise v5.14.0")
        enterpriseChart3["enterprise chart"]
        pg13_5[("PostgreSQL 13+")]
        anchore5 --> pg13_5
        anchorectl6("anchorectl v5.14.0") --V2 api calls--> anchore5
        enterpriseChart3 --deploys--> anchore5
    end

    Start --Upgrade Anchore Enterprise to latest v4.9.x release--> step1;
    step1 --Migrate to Enterprise Chart and PG13+ DB--> step2;
    step2 --Migrate integrations & anchorectl to use V2 API--> step3;
    step3 --Upgrade Anchore Enterprise to v5.14.0 & delete 4.0.x deployment--> finish;

Step 1: Upgrade Anchore Enterprise to latest v4.9.x Release

Downtime: Required

Upgrade your Anchore deployment to v4.9.x. This is an important step for several reasons:

  1. It is supported by both the legacy anchore-engine helm chart and the new enterprise helm chart
  2. It supports PostgreSQL 9.6+ and newer (13+), so it provides a stable base to execute the other upgrade steps
  3. It supports both the V1 and V2 APIs, so you can have a stable Anchore version for updating all your integrations

Upgrade mechanism: normal Anchore Enterprise upgrade process

Step 2: Migrate to Enterprise Chart, 4.9.x and PostgreSQL 13

Downtime: Required

Helm Migration Guide

Step 3: Migrate all integrations and clients to V2 API compatible versions

Downtime: None for Anchore itself, but individual integrations may vary

Once your deployment is running v4.9.x you have a stable platform to migrate integrations and clients to using the V2 API of Enterprise. You should perform the upgrades/migrations for the new V2 API in this phase. This phase may last for a while and does not end until all your API calls are using the V2 endpoint instead of V1.

The recommended V2 API compatible version for each integration is:

  • AnchoreCTL: v4.9.0
  • anchore-k8s-inventory: v1.1.1
  • anchore-ecs-inventory: v1.2.0
  • Kubernetes Admission Controller: v0.5.0
  • Jenkins Plugin: v1.1.0
  • Harbor Scanner Adapter: v1.2.0
  • enterprise-gitlab-scan: v4.0.0

Upgrading AnchoreCTL Usage in CI

The installation script provided via Deploying AnchoreCTL will only automatically deploy new releases that are V1 API compatible, so you need to update your use of that script to request specific versions.

For example, use:

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> v4.9.0

Confirming V1 API is no longer in use

To verify that all clients have been updated, review the logs from the API containers in your v4.9.x deployment and check for requests against V1 routes. We recommend monitoring for multiple days to verify there are no periodic processes that still use the old endpoint.
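
For example, in a Kubernetes deployment the check could look like the following sketch; the namespace, label selector, and log format are assumptions that you should adapt to whatever identifies your API pods:

kubectl logs -n anchore -l app=anchore-api | grep ' /v1/'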

Step 4: Upgrade from Enterprise 4.9.x to 5.9 using version 2.10.0 of the chart

Downtime: Required

Helm Upgrade Guide

Upgrading AnchoreCTL

You will want to install the compatible version of AnchoreCTL (v5.9.0) at this time as well:

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> 5.9.0

Step 5: Upgrade from Enterprise 5.9 to Current Version

Helm Upgrade Guide

Upgrading AnchoreCTL

You will want to install the compatible version of AnchoreCTL (v5.14.0) at this time as well:

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> v5.14.0

Migration Path 2: External DB

If you deploy PostgreSQL using any mechanism other than the Anchore-provided chart (e.g. AWS RDS, your own DB chart, Google CloudSQL, etc.), then this is the migration plan for you.

graph
    subgraph Start[Enterprise v4.x]
        anchoreStart("Enterprise <= v4.8.X")
        pg9[("PostgreSQL 9.6")]
        engineChart["anchore-engine chart"]
        anchorectl("anchorectl v1.7.x") --V1 api calls--> anchoreStart
        anchoreStart --uses--> pg9
        engineChart --deploys--> anchoreStart
    end
    
    subgraph step1[Latest Enterprise v4.9.x]
        %% Upgrade to v4.9.x for V2
        anchoreInter1("Enterprise v4.9.x")
        pg9_2[("PostgreSQL 9.6")]
        engineChart2["anchore-engine chart"]
        anchoreInter1 --uses--> pg9_2
        anchorectl3("anchorectl v1.8.x") --V1 api calls--> anchoreInter1
        engineChart2 --deploys--> anchoreInter1
    end

    subgraph step2[PostgreSQL 13+]
        %% Upgrade external DB to PG13+, no Anchore version change
        anchoreInter2("Enterprise v4.9.x")
        engineChart3["anchore-engine chart"]
        pg13_2[("PostgreSQL 13+")]
        anchoreInter2 --uses--> pg13_2
        anchorectl4("anchorectl v1.8.x") --V1 api calls--> anchoreInter2
        engineChart3 --deploys--> anchoreInter2
    end
    
    subgraph step3[Enterprise Helm Chart]
    %% Migrate to the new enterprise chart
        anchoreInter3("Enterprise v4.9.x")
        pg13[("PostgreSQL 13+")]
        enterpriseChart2["enterprise chart"]
        anchoreInter3 --uses--> pg13
        anchorectl2("anchorectl v1.8.x") --V1 api calls--> anchoreInter3
        enterpriseChart2 --deploys--> anchoreInter3
    end
    
    subgraph step4[Integrations using V2 API]
    %% Upgrade integrations/AnchoreCTL
        anchoreInter4("Enterprise v4.9.x")
        enterpriseChart3["enterprise chart"]
        pg13_4[("PostgreSQL 13+")]
        anchoreInter4 --> pg13_4
        anchorectl5("anchorectl v4.9.x") --V2 api calls--> anchoreInter4
        enterpriseChart3 --deploys--> anchoreInter4
    end

    subgraph finish[Enterprise v5.14.0]
    %% Upgrade to v5.14.0
        anchore5("Enterprise v5.14.0")
        enterpriseChart4["enterprise chart"]
        pg13_5[("PostgreSQL 13+")]
        anchore5 --> pg13_5
        anchorectl6("anchorectl v5.14.0") --V2 api calls--> anchore5
        enterpriseChart4 --deploys--> anchore5
    end

    Start --Upgrade to latest v4.9.x Enterprise--> step1;
    step1 --Upgrade External DB to PostgreSQL 13+--> step2;
    step2 --Migrate to Enterprise Helm Chart--> step3;
    step3 --Migrate Integrations and AnchoreCTL to use V2 API--> step4;
    step4 --Upgrade Anchore to v5.14.0 --> finish;

Step 1: Upgrade to latest Anchore Enterprise v4.9.x

Downtime: Required

Upgrade your Anchore deployment to v4.9.x. This is an important step for several reasons:

  1. It is supported by both the legacy anchore-engine helm chart and the new enterprise helm chart
  2. It supports PostgreSQL 9.6+ and newer (13+), so it provides a stable base to execute the other upgrade steps
  3. It supports both the V1 and V2 APIs, so you can have a stable Anchore version for updating all your integrations

Step 2: Upgrade PostgreSQL from 9.6.x to 13+

Downtime: Required

Enterprise v5.14.0 requires PostgreSQL 13 or later to run. The DB upgrade process will be specific to your deployment mechanism and way of running Postgres. Depending on the version of PostgreSQL you are running when you start, multiple DB upgrade operations may be necessary to get to 13+.

However, this upgrade can be done with any Anchore version. All 4.x versions of Anchore already support PostgreSQL 13+, so the DB upgrade can be executed outside any changes to the Anchore deployment itself.

If you are using AWS RDS or another cloud platform for hosting your PostgreSQL database, please refer to their upgrade documentation for the best practices to upgrade your instance(s) to version 13 or higher.

Step 3: Migrate to Enterprise Helm Chart

Downtime: Required

Helm Migration Guide

Step 4: Upgrade all your integrations/clients to use the V2 API

Downtime: None for Anchore itself, but individual integrations may vary

Once your deployment is running v4.9.x you have a stable platform to migrate integrations and clients to using the V2 API of Enterprise. You should perform the upgrades/migrations for the new V2 API in this phase. This phase may last for a while and does not end until all your API calls are using the V2 endpoint instead of V1.

The recommended V2 API compatible version for each integration is:

  • AnchoreCTL: v4.9.0
  • anchore-k8s-inventory: v1.1.1
  • anchore-ecs-inventory: v1.2.0
  • Kubernetes Admission Controller: v0.5.0
  • Jenkins Plugin: v1.1.0
  • Harbor Scanner Adapter: v1.2.0
  • enterprise-gitlab-scan: v4.0.0

Upgrading AnchoreCTL Usage in CI

The installation script provided via Deploying AnchoreCTL will only automatically deploy new releases that are V1 API compatible, so you need to update your use of that script to request specific versions.

For example, use:

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> v4.9.0

Confirming V1 API is no longer in use

To verify that all clients have been updated, review the logs from the API containers in your v4.9.x deployment and check for requests against V1 routes (see the log-check sketch in Migration Path 1). We recommend monitoring for multiple days to verify there are no periodic processes that still use the old endpoint.

Step 5: Upgrade from Enterprise 4.9.x to 5.9 using version 2.10.0 of the chart

Downtime: Required

Helm Upgrade Guide

Upgrading AnchoreCTL

You will want to install the compatible version of AnchoreCTL (v5.9.0) at this time as well:

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> 5.9.0

Step 6: Upgrade from Enterprise 5.9 to Current Version

Helm Upgrade Guide

Upgrading AnchoreCTL

You will want to install the compatible version of AnchoreCTL (v5.14.0) at this time as well:

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> v5.14.0

Verifying the Upgrade

Verify the version of AnchoreCTL you are using:

anchorectl version – All users should see v5.14.0 for the AnchoreCTL version

anchorectl system status – The system should return v5.14.0

7 - Anchore Secure - Vulnerability Management

Vulnerability management is the practice of identifying, categorizing, and remediating security vulnerabilities in software. Using a Software Bill of Materials (SBOM) as a foundation, Anchore Enterprise provides a mechanism for scanning containerized software for security vulnerabilities. With 100% API coverage and fully-documented APIs, users can automate vulnerability scanning and monitoring - performing scans in CI/CD pipelines, registries, and Kubernetes platforms. Furthermore, Anchore Enterprise allows users to identify malware, secrets, and other security risks through catalogers.

Jump into a particular topic using the links below:

Vulnerability Data Sources

Vulnerability matching in Anchore Enterprise and Grype begins with collecting vulnerability data from multiple sources to identify vulnerabilities in the packages cataloged within an SBOM.

Anchore Enterprise and Grype consolidate data from these sources into a format suitable for vulnerability identification in SBOMs. One key source of data is the National Vulnerability Database (NVD). The NVD serves as a widely recognized, vendor-independent resource for vulnerability identification. Additionally, it provides a framework for measuring the severity of vulnerabilities: the NVD publishes Common Vulnerability Scoring System (CVSS) scores, which assign numerical values ranging from 0 to 10 to indicate the severity of vulnerabilities. These scores help organizations prioritize vulnerabilities based on their potential impact.

However, due to known limitations with NVD data, relying on additional sources becomes essential. Anchore Enterprise and Grype also collect vulnerability data from vendor-specific databases, which play a crucial role in accurate and efficient detection. These sources enable vulnerability matching from the vendor’s perspective. Examples of such vendor-specific databases include GitHub, the Microsoft Security Response Center (MSRC), and the Red Hat Security Response Database, among others.

Data Import & Normalization

Anchore has a tool called vunnel that is responsible for reaching out to various data sources, parsing and normalizing that data, and then storing it for future use.

There is not one standard format for publishing vulnerability data, and even when there is a standardized data format, such as OVAL or OSV, those formats often have minor incompatible differences in their implementations. The purpose of vunnel is to understand each data source and then output a single consistent format that can be used to construct a vulnerability database.

Providers

The process begins with vunnel reaching out to vulnerability data sources. These sources are known as “providers”. The following is a list of vunnel providers:

  • Alpine: Focuses on lightweight Linux distributions and provides vulnerability data tailored specifically to Alpine packages.
  • Amazon: Offers vulnerability data for its cloud services and Linux distributions, such as Amazon Linux.
  • Chainguard: Specializes in securing software supply chains and delivers vulnerability insights for containerized environments.
  • Debian: Maintains a robust security tracker for vulnerabilities in its packages, concentrating on open-source software used in Debian-based systems.
  • GitHub: Provides vulnerability data supported by an extensive advisory database for developers.
  • Mariner (CBL-Mariner): Microsoft’s Linux distribution, which provides vulnerability data within its ecosystem.
  • NVD (National Vulnerability Database): Serves as the official U.S. government repository of vulnerability information.
  • Oracle: Tracks vulnerabilities in Oracle Linux and other Oracle products, focusing on enterprise environments.
  • RHEL (Red Hat Enterprise Linux): Delivers detailed and timely vulnerability data for Red Hat products.
  • SLES (SUSE Linux Enterprise Server): Offers vulnerability data for SUSE Linux products, with a strong focus on enterprise solutions, particularly in cloud and container environments.
  • Ubuntu: Maintains a well-documented vulnerability tracker and provides regular security updates for its popular Linux distribution.
  • Wolfi: A community-driven, secure-by-default Linux-based distribution that emphasizes supply chain security and provides reliable vulnerability tracking.

vunnel reaches out to all of these providers, collates the vulnerability data, and consolidates it for use. The end product of vunnel’s operations is what we call the Grype database (GrypeDB).
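
As an aside, vunnel is an open source command-line tool; a minimal sketch of its usage (assuming vunnel is installed locally) looks like:

vunnel list          # show the available providers
vunnel run wolfi     # pull and normalize data from the wolfi provider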

Building GrypeDB

When the data from vunnel is collected into a database, we call that GrypeDB. This is a SQLite database used by both Grype and Anchore Enterprise for matching vulnerabilities. The Anchore Enterprise database and the Grype database (consolidated by vunnel) are not the same data: the hosted Anchore Enterprise database contains the consolidated GrypeDB as well as the exclusion database and Microsoft MSRC vulnerability data.

Non-Anchore (upstream) Data Updates

When there are problems with other data sources, we contact those upstream sources and work with them to correct issues. Anchore has an “upstream first” policy for data corrections. Whenever possible we will work with upstream data sources rather than trying to correct only our data. We believe this creates a better overall vulnerability data ecosystem, and fosters beneficial collaboration channels between Anchore and the upstream projects.

An example of how we submit upstream data updates can be seen in the GitHub Advisory Database.

Data Enrichment

Due to the known issues with the NVD, Anchore Enterprise enhances the quality of its data for analysis by enriching the information obtained from the NVD. This process involves human intervention to review and correct the data. Once this manual process is completed, the cleaned and refined data is stored in the Anchore Enrichment Database.

The Anchore enriched data can be reviewed in GitHub.

The scripts that drive this enrichment process are also available in GitHub.

Before this process was implemented, correcting NVD data was a challenge. With our enrichment process, we now have the flexibility to make changes to affected products and versions. The key advantage is that the data used by Anchore Enterprise is highly reliable, ensuring that any downloaded data is accurate and free from the common issues associated with NVD data.

An example of enriching NVD data for more accurate detection is CVE-2024-4030, which was initially identified as affecting Debian when it actually only impacts Windows. By applying our enrichment process, we were able to correct this error.

Vulnerability Matching Process

When it’s time to compare the data from an SBOM to the vulnerability data assembled in Anchore Enterprise, we call that matching. Any time vulnerability data is surfaced in Anchore Enterprise, that data is the result of vulnerability matches.

CPE Matching

CPE, which stands for Common Platform Enumeration, is a structured naming scheme standardized by the National Institute of Standards and Technology (NIST) to describe software, hardware, and firmware. It uses a standardized format that helps tools and systems compare and identify products efficiently.
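
For illustration, a CPE 2.3 name is a set of colon-separated fields: a part (a for application, o for operating system, h for hardware), vendor, product, version, and several further attributes that are often wildcarded. The example below is illustrative, not drawn from NVD:

cpe:2.3:a:eclipse:jetty:9.4.53:*:*:*:*:*:*:*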

CPE matching involves comparing the CPEs found in the SBOM of a software product against a list of known CPE entries to find a match. The diagram below illustrates the steps involved in CPE matching.

CPE Matching

Due to the current state of the NVD data mentioned above, CPE matching sometimes leads to false positive matches; this led to the creation of the exclusion data we manage at Anchore.

Vulnerability Match Exclusions

There are times we cannot solve a false positive match using data alone. This is generally due to limitations of how CPE matching works. In those instances, Anchore Enterprise has a concept called Vulnerability Match Exclusions. These exclusions allow us to remove a vulnerability from the findings for a specific set of match criteria.

The data for the vulnerability match exclusions is held in a private repository. The data behind this list is not included in the open source Grype vulnerability data.

For example, consider CVE-2012-2055, a vulnerability reported against the GitHub product. When trying to match a CPE against this CVE, CPE is unable to capture this level of detail, and GitHub libraries for different ecosystems will show up as affected; the Python GitHub library is an example. In order to resolve this, we exclude the language ecosystems using a match exclusion.

Vuln Match Exclusion

The exclusion data can be seen below:

exclusions:
- constraints:
  - namespaces:
    - nvd:cpe
    packages:
    - language: java
    - language: python
    - language: javascript
    - language: ruby
    - language: rust
    - language: go
    - language: php
  justification: This vulnerability affects the GitHub product suite, not language-specific clients
id: CVE-2012-2055

Matching

The matching process that happens against an SBOM is the same basic process in both Grype and Anchore Enterprise. The vulnerability data is stored in GrypeDB; details such as the vulnerability ID, the packages and versions affected, and fix information are part of these records.

For example, for vulnerability CVE-2024-9823 we store the package name (Jetty), the fixed version (9.4.54), and which ecosystems are affected, such as Debian, NVD, and Java. We call these ecosystems a namespace in the context of a match.

The namespace used for the match is determined by the package stored in the SBOM. For a Debian package, the Debian namespace is used; for Java, GitHub is used by default in Grype, but NVD is used by default in Anchore Enterprise. The default matcher for Java can be changed in Anchore Enterprise, and we encourage you to do so, as it will result in higher quality matches; we will be changing this default in a future release. See disabling CPE matching per supported ecosystem, sketched briefly below.
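
For example, using the configuration format documented under Vulnerability Matching Configuration below, disabling CPE-based matching for just the Java ecosystem (so the GitHub data is preferred) would look like:

policy_engine:
    vulnerabilities:
      matching:
        ecosystem_specific:
          java:
            search:
              by_cpe:
                enabled: false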

The details about the versions affected will be used to determine if the version reported by the SBOM falls within the affected range. If it does, the vulnerability matches.

For a successful match, the fixed details field will be used to display which version fixes a particular vulnerability. The fix details are specific to each namespace. The version in Debian that fixes this vulnerability, 9.4.54-1, is not the same as the version that fixes the Java package, 9.4.54.

It should also be noted that if a vulnerability appears on the match exclusion list, it would be removed as a match.

Once a match exists, additional metadata can be surfaced. We store details such as severity and CVSS alongside these records. Sometimes a field, such as severity or CVSS, may be missing; missing fields are filled in with the data from NVD if it is available there.

Vulnerability Matching Configuration

Search by CPE can be globally configured per supported ecosystem via the Anchore Enterprise policy engine config. The default enables search by CPE for all ecosystems except JavaScript (since NPM package vulnerability reports are exhaustively covered by the GitHub Security Advisory Database).

A fully-specified default config is as below:

policy_engine:
    vulnerabilities:
      matching:
        default:
          search:
            by_cpe:
              enabled: true
        ecosystem_specific:
          dotnet:
            search:
              by_cpe:
                enabled: true
          golang:
            search:
              by_cpe:
                enabled: true
          java:
            search:
              by_cpe:
                enabled: true
          javascript:
            search:
              by_cpe:
                enabled: false
          python:
            search:
              by_cpe:
                enabled: true
          ruby:
            search:
              by_cpe:
                enabled: true
          stock:
            search:
              by_cpe:
                # Disabling search by CPE for the stock matcher will entirely disable binary-only matches
                # and is *NOT ADVISED*
                enabled: true

A shorter form of the default config is:

policy_engine:
    vulnerabilities:
      matching:
        default:
          search:
            by_cpe:
              enabled: true
        ecosystem_specific:
          javascript:
            search:
              by_cpe:
                enabled: false

If disabling search by CPE for all GitHub covered ecosystems is desired, the config would look like:

policy_engine:
    vulnerabilities:
      matching:
        default:
          search:
            by_cpe:
              enabled: false
        ecosystem_specific:
          stock:
            search:
              by_cpe:
                enabled: true

Comprehensive Distributions

When matching vulnerabilities against a Linux distribution, such as Alpine, Red Hat, or Ubuntu, there is a concept we call a “comprehensive distribution”. A comprehensive distribution reports both fixed and unfixed vulnerabilities in its data feed.

For example, Red Hat reports on all vulnerabilities, including unfixed vulnerabilities. Some distros, like Alpine, do not report unfixed vulnerabilities. When a distribution does not contain comprehensive vulnerability information, we fall back to other data sources as a best effort to determine vulnerabilities that affect Alpine and are not fixed yet.

Fix Details

There are some additional details about the fix data from NVD that should be explained. NVD doesn’t contain explicit fix information for a given vulnerability; other namespaces, such as GitHub and Debian, do. There is a concept of “Less Than” and “Less Than or Equal” in the NVD data. When a vulnerability is tagged with “Less Than or Equal”, it could mean there is no fix available, the fix couldn’t be determined, or a fix was unavailable at the time NVD looked at it. In those cases we cannot show fix details for a vulnerability match.

If NVD uses “Less Than”, it is assumed that the version noted is the fixed version, unless that version is part of the affected range of a subsequent CPE configuration for the same CVE. We will present that version as containing the fix.

For example if we see data that looks like this:

some_package LessThan 1.2.3

We would assume version 1.2.3 contains the fix, and any version less than that, such as 1.2.2 is vulnerable. Alternatively, if we see:

some_package LessThanOrEqual 1.2.2

We know version 1.2.2 and below are vulnerable. However, we do not know which version contains the fix; it could be version 1.3.0, or 1.2.3, or even 2.0.0. In these cases we do not surface fix details. If we are able to figure out such details in the future, we will update our CVE data.

7.1.1 - Analyzing Images via CTL

Introduction

In this section you will learn how to analyze images with Anchore Enterprise using AnchoreCTL in two different ways:

  1. Distributed Analysis: AnchoreCTL analyzes the image content on the host where it runs, then imports the analysis into your Anchore deployment
  2. Centralized Analysis: The Anchore deployment downloads and analyzes the image content directly

Using AnchoreCTL for Centralized Analysis

Overview

This method of image analysis uses the Enterprise deployment itself to download and analyze the image content. You’ll use AnchoreCTL to make API requests telling Anchore which image to analyze, but the Enterprise deployment does the work. You can refer to the Image Analysis Process document in the concepts section to better understand how centralized analysis works in Anchore.

sequenceDiagram
    participant A as AnchoreCTL
    participant R as Registry
    participant E as Anchore Deployment
    A->>E: Request Image Analysis
    E->>R: Get Image content
    R-->>E: Image Content
    E->>E: Analyze Image Content (Generate SBOM and secret scans etc) and store results
    E->>E: Scan sbom for vulns and evaluate compliance

Usage

The anchorectl image add command instructs the Anchore Enterprise deployment to pull (download) and analyze an image from a registry. Anchore Enterprise will attempt to retrieve metadata about the image from the Docker registry and if successful will initiate a pull of the image and queue the image for analysis. The command will output details about the image including the image digest, image ID, and full name of the image.

# anchorectl image add docker.io/library/nginx:latest
 ✔ Added Image 
Image:
  status:           not-analyzed (active)
  tag:              docker.io/library/nginx:latest
  digest:           sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
  id:               2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763

For an image that has not yet been analyzed, the status will appear as not_analyzed. Once the image has been downloaded, it will be queued for analysis. When the analysis begins, the status will be updated to analyzing, after which the status will update to analyzed.

Anchore Enterprise can be configured to enforce a size limit on images added for analysis. Attempting to add an image that exceeds the configured size will fail, return a 400 API error, and log an error message in the catalog service detailing the failure. This feature is disabled by default; see the documentation for additional details on how it behaves and instructions on how to configure the limit.
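
For reference, a minimal sketch of what enabling the limit might look like in the Anchore Enterprise config.yaml. The max_compressed_image_size_mb key and the value shown are assumptions for illustration; consult the configuration documentation for the authoritative setting:

# config.yaml (fragment) — key name illustrative, see the configuration docs
max_compressed_image_size_mb: 700   # reject images whose compressed size exceeds 700 MB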

Using AnchoreCTL for Distributed Analysis

Overview

This way of adding images uses AnchoreCTL to perform the analysis of an image outside the Enterprise deployment, so the Enterprise deployment never downloads or touches the image content directly. The generation of the SBOM, secret searches, filesystem metadata, and content searches are all performed by AnchoreCTL on the host where it is run (CI, laptop, runtime node, etc.) and the results are imported into the Enterprise deployment, where the image can be scanned for vulnerabilities and evaluated against policy.

sequenceDiagram
    participant A as AnchoreCTL
    participant R as Registry/Docker Daemon
    participant E as Anchore Deployment
    A->>R: Get Image content
    R-->>A: Image Content
    A->>A: Analyze Image Content (Generate SBOM and secret scans etc)
    A->>E: Import SBOM, secret search, fs metadata
    E->>E: Scan sbom for vulns and evaluate compliance

Configuration

Enabling the full set of analyzers, “catalogers” in AnchoreCTL terms, requires updates to the config file used by AnchoreCTL. See Configuring AnchoreCTL for more information on the format and options.
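
As a rough illustration, the AnchoreCTL configuration file might resemble the fragment below. The connection keys are typical, but the cataloger option names shown in comments are hypothetical, so consult Configuring AnchoreCTL for the options your version actually supports:

# .anchorectl.yaml (fragment)
url: "http://localhost:8228"     # Anchore Enterprise API endpoint
username: "admin"
password: "foobar"
# Cataloger toggles (key names hypothetical — see Configuring AnchoreCTL), e.g.:
# secret-search:
#   enabled: true
# file-contents:
#   enabled: true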

Usage

The anchorectl image add --from [registry|docker] command will run a local SBOM generation and analysis (secret scans, filesystem metadata, and content searches) and upload the results to Anchore Enterprise without the image ever being touched or loaded by your Enterprise deployment.

# anchorectl image add docker.io/library/nginx:latest --from registry
 ✔ Added Image 
Image:
  status:           not-analyzed (active)
  tag:              docker.io/library/nginx:latest
  digest:           sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
  id:               2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763

For an image that has not yet been analyzed, the status will appear as not_analyzed. Once the image has been downloaded it will be queued for analysis. When the analysis begins the status will be updated to analyzing, after which the status will update to analyzed.

The --platform option in distributed analysis specifies a platform other than the local host’s to use when retrieving the image from the registry for analysis by AnchoreCTL.

# anchorectl image add alpine:latest --from registry --platform linux/arm64

Adding images that you own

For images that you are building yourself, the Dockerfile used to build the image should always be passed to Anchore Enterprise at the time of image addition. This is achieved by adding the image as above, but with the additional option to pass the Dockerfile contents to be stored with the system alongside the image analysis data.

This can be achieved in both analysis modes.

For centralized analysis:

# anchorectl image add myrepo.example.com:5000/app/webapp:latest --dockerfile /path/to/Dockerfile

For distributed analysis:

# anchorectl image add myrepo.example.com:5000/app/webapp:latest --from registry --dockerfile /path/to/Dockerfile

To update an image’s Dockerfile, simply run the same command again with the path to the updated Dockerfile, along with --force to re-analyze the image using it, as shown below. Note that running add without --force (see below) will not re-add an image if it already exists.
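
For example, to re-analyze a previously added image with an updated Dockerfile (paths illustrative):

# anchorectl image add myrepo.example.com:5000/app/webapp:latest --dockerfile /path/to/updated/Dockerfile --force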

Providing Dockerfile content is supported in both push and pull modes for adding images.

Additional Options

When adding an image, there are some additional (optional) parameters that can be used. We show some examples below; all of them apply to both the distributed and centralized analysis workflows.

# anchorectl image add docker.io/library/alpine:latest --force
 ✔ Added Image                                                                                                                                                                                                                             docker.io/library/alpine:latest
Image:
  status:           not-analyzed (active)
  tags:             docker.io/alpine:3
                    docker.io/alpine:latest
                    docker.io/dnurmi/testrepo:test0
                    docker.io/library/alpine:latest
  digest:           sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870
  id:               9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5
  distro:           [email protected] (amd64)
  layers:           1

The --force option can be used to reset the image analysis status of any image to not_analyzed, which is the base analysis state for an image. This option shouldn’t be necessary under normal circumstances, but can be useful if image re-analysis is needed for any reason.

# anchorectl image add myrepo.example.com:5000/app/webapp:latest --dockerfile /path/to/dockerfile --annotation owner=someperson --annotation [email protected]

The --annotation parameter can be used to specify ‘key=value’ pairs to associate with the image at the time of image addition. These annotations will then be carried along with the tag, and will appear in image records when fetched and in webhook notification payloads that contain image information. To change an annotation, simply run the add command again with the updated annotation, and the old annotation will be overridden.
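
For example, to replace the owner annotation set above with a new value (values illustrative):

# anchorectl image add myrepo.example.com:5000/app/webapp:latest --annotation owner=someoneelse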

# anchorectl image add alpine:latest --no-auto-subscribe

The --no-auto-subscribe flag can be used if you do not want the system to automatically subscribe the input tag to the ’tag_update’ subscription, which controls whether the system will automatically watch the added tag for image content updates and pull in the latest content for centralized analysis. See Subscriptions for more information about using subscriptions and notifications in Anchore.
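
If you later decide to watch the tag after all, the subscription can be enabled afterwards. A sketch, assuming the subscription activate subcommand is available in your version of AnchoreCTL:

# anchorectl subscription activate docker.io/alpine:latest tag_update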

These options are supported in both distributed and centralized analysis.

Image Tags

In this example, we’re adding docker.io/mysql:latest. If we then attempt to add a tag that maps to the same image, for example docker.io/mysql:8, Anchore Enterprise will detect the duplicate image identifiers and return the details of all tags matching that image.

Image:
  status:           analyzed (active)
  tags:             docker.io/mysql:8
                    docker.io/mysql:latest
  digest:           sha256:8191525e9110aa32b436a1ec772b76b9934c1618330cdb566ca9c4b2f01b8e18
  id:               4390e645317399cc7bcb50a5deca932a77a509d1854ac194d80ed5182a6b5096
  distro:           [email protected] (amd64)
  layers:           11

Deleting An Image

The following command instructs Anchore Enterprise to delete the image analysis from the working set using a tag. The --force option must be used if there is only one digest associated with the provided tag, or if any active subscriptions are enabled against the referenced tag.

# anchorectl image delete mysql:latest --force
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST                                                                  │ STATUS   │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:8191525e9110aa32b436a1ec772b76b9934c1618330cdb566ca9c4b2f01b8e18 │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘

To delete a specific image record, the digest can be supplied instead to ensure it is the exact image record you want:

# anchorectl image delete sha256:899a03e9816e5283edba63d71ea528cd83576b28a7586cf617ce78af5526f209
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST                                                                  │ STATUS   │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:899a03e9816e5283edba63d71ea528cd83576b28a7586cf617ce78af5526f209 │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘

Deactivate Tag Subscriptions

Check if the tag has any active subscriptions.


# anchorectl subscription list
 ✔ Fetched subscriptions
┌──────────────────────────────────────────────────────────────────────┬─────────────────┬────────┐
│ KEY                                                                  │ TYPE            │ ACTIVE │
├──────────────────────────────────────────────────────────────────────┼─────────────────┼────────┤
│ docker.io/alpine:latest                                              │ policy_eval     │ false  │
│ docker.io/alpine:3.12.4                                              │ policy_eval     │ false  │
│ docker.io/alpine:latest                                              │ vuln_update     │ false  │
│ docker.io/redis:latest                                               │ policy_eval     │ false  │
│ docker.io/centos:8                                                   │ policy_eval     │ false  │
...
...

If the tag has any active subscriptions, they can be disabled (deactivated) in order to permit deletion:

# anchorectl subscription deactivate docker.io/alpine:3.12.6 tag_update
 ✔ Deactivate subscription
Key: docker.io/alpine:3.12.6
Type: tag_update
Id: a6c7559deb7d5e20621d4a36010c11b0
Active: false
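
With its subscriptions deactivated, the image can then be deleted as shown earlier, still passing --force if only a single digest is associated with the tag:

# anchorectl image delete docker.io/alpine:3.12.6 --force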

Advanced

Anchore Enterprise also allows adding images directly by digest / tag / timestamp tuple, which can be useful for adding images that are still available in a registry but no longer associated with a current tag.

To add a specific image by digest with the tag it should be associated with:

# anchorectl image add docker.io/nginx:stable@sha256:f586d972a825ad6777a26af5dd7fc4f753c9c9f4962599e6c65c1230a09513a8

Note: this will submit the specific image by digest with the associated tag, but Anchore will treat that digest as the most recent digest for the tag. If the image registry actually has a different history (e.g. a newer image has been pushed to that tag), then the tag history in Anchore may not accurately reflect the history in the registry.

Next Steps

Next, let’s find out how to Inspect Image Content

7.1.1.1 - Inspecting Image Content

Introduction

During the analysis of container images, Anchore Enterprise performs deep inspection, collecting data on all artifacts in the image including files, operating system packages and software artifacts such as Ruby GEMs and Node.JS NPM modules.

Inspecting images

The image content command can be used to return detailed information about the content of the container image.

# anchorectl image content INPUT_IMAGE -t CONTENT_TYPE

The INPUT_IMAGE can be specified in one of the following formats:

  • Image Digest
  • Image ID
  • registry/repo:tag

The CONTENT_TYPE can be one of the following types:

  • os: Operating System Packages
  • files: All files in the image
  • go: GoLang modules
  • npm: Node.JS NPM Modules
  • gem: Ruby GEMs
  • java: Java Archives
  • python: Python Artifacts
  • nuget: .NET NuGet Packages
  • binary: Language runtime locations and version (e.g. openjdk, python, node)
  • malware: ClamAV malware scan results, if enabled

You can always get the latest available content types using the ‘-a’ flag:

# anchorectl image content library/nginx:latest -a
 ✔ Fetched content                           [fetching available types]                                                                                                                                    library/nginx:latest
binary
files
gem
go
java
malware
npm
nuget
os
python

For example:

# anchorectl image content library/nginx:latest -t files
 ✔ Fetched content                           [0 packages] [6099 files]                                                                                                                                                                                                                                                   library/nginx:latest
Files:
┌────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┬───────┬─────┬─────┬───────┬───────────────┬──────────────────────────────────────────────────────────────────┐
│ FILE                                                                                               │ LINK                                                                                               │ MODE  │ UID │ GID │ TYPE  │ SIZE          │ SHA256 DIGEST                                                    │
├────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┼───────┼─────┼─────┼───────┼───────────────┼──────────────────────────────────────────────────────────────────┤
│ /bin                                                                                               │                                                                                                    │ 00755 │ 0   │ 0   │ dir   │ 0             │                                                                  │
│ /bin/bash                                                                                          │                                                                                                    │ 00755 │ 0   │ 0   │ file  │ 1.234376e+06  │ d86b21405852d8642ca41afae9dcf0f532e2d67973b0648b0af7c26933f1becb │
│ /bin/cat                                                                                           │                                                                                                    │ 00755 │ 0   │ 0   │ file  │ 43936         │ e9165e34728e37ee65bf80a2f64cd922adeba2c9f5bef88132e1fc3fd891712b │
│ /bin/chgrp                                                                                         │                                                                                                    │ 00755 │ 0   │ 0   │ file  │ 72672         │ f47bc94792c95ce7a4d95dcb8d8111d74ad3c6fc95417fae605552e8cf38772c │
│ /bin/chmod                                                                                         │                                                                                                    │ 00755 │ 0   │ 0   │ file  │ 64448         │ b6365e442b815fc60e2bc63681121c45341a7ca0f540840193ddabaefef290df │
│ /bin/chown                                                                                         │                                                                                                    │ 00755 │ 0   │ 0   │ file  │ 72672         │ 4c1443e2a61a953804a462801021e8b8c6314138371963e2959209dda486c46e │
...

AnchoreCTL will output a subset of fields from the content view; for example, for files only the file name and size are displayed. To retrieve the full output, the -o json option should be passed.

For example:

# anchorectl -o json image content library/nginx:latest -t files
 ✔ Fetched content                           [0 packages] [6099 files]                                                                                                                                     library/nginx:latest
{
  "files": [
    {
      "filename": "/bin",
      "gid": 0,
      "linkdest": null,
      "mode": "00755",
      "sha256": null,
      "size": 0,
      "type": "dir",
      "uid": 0
    },
...
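
Because the output is standard JSON, it can be post-processed with ordinary tooling. For example, listing just the file names with jq (assuming jq is installed):

# anchorectl -o json image content library/nginx:latest -t files | jq '.files[].filename'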

Next Steps

Next, let’s find out how to view Security Vulnerabilities.

7.1.1.2 - Viewing Security Vulnerabilities

Introduction

The image vulnerabilities command can be used to return a list of vulnerabilities found in the container image.

# anchorectl image vulnerabilities INPUT_IMAGE -t VULN_TYPE

The INPUT_IMAGE can be specified in one of the following formats:

  • Image Digest
  • Image ID
  • registry/repo:tag

The VULN_TYPE currently supports:

  • os: Vulnerabilities against operating system packages (RPM, DPKG, APK, etc.)
  • non-os: Vulnerabilities against language packages (NPM, GEM, Java Archive (jar, war, ear), Python PIP, .NET NuGet, etc.)
  • all: Combination report containing both ‘os’ and ’non-os’ vulnerability records.

The system has been designed to incorporate third-party feeds for other vulnerabilities.

Examples

To generate a report of OS package (RPM/DEB/APK) vulnerabilities found in the image, including the CVE identifier, vulnerable package, severity level, vulnerability details, and the version of the fixed package (if available), use the os vulnerability type:

# anchorectl image vulnerabilities debian:latest -t os

Currently, the system draws vulnerability data specifically matched to the following OS distros:

  • Alpine
  • CentOS
  • Debian
  • Oracle Linux
  • Red Hat Enterprise Linux
  • Red Hat Universal Base Image (UBI)
  • Ubuntu
  • Suse Linux
  • Amazon Linux 2
  • Google Distroless

To generate a report of language package (NPM/GEM/Java/Python) vulnerabilities, the system draws vulnerability data from the NVD data feed, and vulnerability reports can be viewed using the ’non-os’ vulnerability type:

# anchorectl image vulnerabilities node:latest -t non-os

To generate a list of all vulnerabilities that can be found, regardless of whether they are against an OS or non-OS package type, the ‘all’ vulnerability type can be used:

# anchorectl image vulnerabilities node:latest -t all

Finally, for any of the above queries, these commands (and other anchorectl commands) can be passed the -o json flag to output the data in JSON format:

# anchorectl -o json image vulnerabilities node:latest -t all

Other options can be reviewed by issuing anchorectl image vulnerabilities --help at any time.

Next Steps

  • Subscribe to receive notifications when the image is updated, when the policy status changes or when new vulnerabilities are detected.

7.1.2 - Image Analysis via UI

Overview

In this section you will learn how to submit images for analysis using the user interface, and how to execute a bulk removal of pending items or previously-analyzed items from within a repository group.

Note: Only administrators and standard users with the requisite role-based access control permissions are allowed to submit items for analysis, or remove previously analyzed assets.

Getting Started

From within an authenticated session, click the Image Analysis button on the navigation bar:

alt text

You will be presented with the Image Analysis view. On the right-hand side of this view you will see the Analyze Repository and Analyze Tag buttons:

alt text

These controls allow you to add entire repositories or individual items to the Anchore analysis queue, and to also provide details about how you would like the analysis of these submissions to be handled on an ongoing basis. Both options are described below in the following sections.

Analyze a Repository

After clicking the Analyze Repository button, you are presented with the following dialog:

alt text

The following fields are required:

  • Registry—for example: docker.io
  • Repository—for example: library/centos

Provided below these fields is the Watch Tags in Repository configuration toggle. By default, when One-Time Tag Analysis is selected, all tags currently present in the repository will be analyzed; once the initial analysis is complete, the repository will not be watched for future additions.

Setting the toggle to Automatically Check for Updates to Tags specifies that the repository will be monitored for any new tag additions that take place after the initial analysis is complete. Note that you are also able to set this option for any submitted repository from within the Image Analysis view.

Once you have populated the required fields and click OK, you will be notified of the overhead of submitting this repository by way of a count that shows the maximum number of tags detected within that repository that will be analyzed:

alt text

You can either click Cancel to abandon the repository analysis request at this point, or click OK to proceed, whereupon the specified repository will be flagged for analysis.

The max image size configuration applies to repositories added via the UI. See max image size.

Analyze a Tag

After clicking the Analyze Tag button, you are presented with the following dialog:

alt text

The following fields are required:

  • Registry—for example, docker.io
  • Repository—for example, library/centos
  • Tag—for example, latest

Note: Depending upon where the dialog was invoked, the above fields may be pre-populated. For example, if you clicked the Analyze Tag button while looking at a view under Image Analysis that describes a previously-analyzed repository, the name of that repository and its associated registry will be displayed in those fields.

Some additional options are provided on the right-hand side of the dialog:

  • Watch Tag—enabling this toggle specifies that the tag should be monitored for image updates on an ongoing basis after the initial analysis

  • Force Reanalysis—if the specified tag has already been analyzed, you can force re-analysis by enabling this option. You may want to force re-analysis if you decide to add annotations (see below) after the initial analysis. This option is ignored if the tag has not yet been analyzed.

  • Add Annotation—annotations are optional key-pair values that can be added to the image metadata. They are visible within the Overview tab of the Image Analysis view once the image has been analyzed, as well as from within the payload of any webhook notification from Anchore that contains image information.

Once you have populated the required fields and click OK, the specified tag will be scheduled for analysis.

The max image size configuration applies to images added via the UI. See max image size.

Note: Anchore will attempt to download images from any registry without requiring further configuration. However, if your registry needs authentication then the corresponding credentials will need to be defined. See Configuring Registries for more information.

View and Download Vulnerabilities

To view the vulnerability details, follow these steps:

  1. Click on the image icon.
  2. Under the Repository column, click the hyperlink of the repository where the desired image is located.

alt text

  3. You will then see all the images in the selected repository. Under the Most Recently Analyzed Image Digest column, click on the image digest for which you want to view or download the vulnerabilities.

alt text

This takes you to the Policy compliance page.

  4. On this page, click on Vulnerabilities.

alt text

  5. After clicking the vulnerabilities icon, you will see a list of vulnerabilities for that image. To download the vulnerabilities, click on the Vulnerability Report icon and choose either the JSON or CSV format.

Repository Deletion

Shown below is an example of a repository view under Image Analysis:

alt text

From a repository view you can carry out actions relating to the bulk removal of items in that repository. The Analysis Cancellation / Repository Removal control is provided in this view, adjacent to the analysis controls:

alt text

After clicking this button you are presented with the following options:

  • Cancel Images Currently Pending Analysis—this option is only enabled if you have one or more tags in the repository view that are currently scheduled for analysis. When invoked, all pending items will be removed from the queue. This option is particularly useful if you have selected a repository for analysis that contains many tags, and the overall analysis operation is taking longer than initially expected.

    Note: If there is at least one item present in the repository that is not pending analysis, you will be offered the opportunity to decide if you want the repository to be watched after this operation is complete.

  • Remove Repository and Analyzed Items—In order to remove a repository from the repository view in its entirety, all items currently present within the repository must first be removed from Anchore. When invoked, all items (in any state of analysis) will be removed. If the repository is being watched, this subscription is also removed.

7.2 - Scanning Repositories

Introduction

Individual images can be added to Anchore Enterprise using the image add command. This may be performed by a CI/CD plugin such as Jenkins, or manually by a user via the UI, AnchoreCTL, or the API.

Anchore Enterprise can also be configured to scan repositories and automatically add any tags found in the repository. This is referred to as a Repository Subscription. Once added, Anchore Enterprise will periodically check the repository for new tags and add them to Anchore Enterprise. For more details on the Repository Subscription, please see Subscriptions

Note: When you add a registry to Anchore, no images are pulled automatically. This prevents your Anchore deployment from being overwhelmed by a very large number of images. Therefore, you should think of adding a registry as a preparatory step that allows you to then add specific repositories or tags without having to provide the access credentials for each. Because a repository typically includes a manageable number of images, when you add a repository to Anchore, all tags in that repository are automatically pulled and analyzed. For more information about managing registries, see Managing Registries.

Adding Repositories

The repo add command instructs Anchore Enterprise to add the specified repository to its watch list.


# anchorectl repo add docker.io/alpine
 ✔ Added r