Overview of Anchore Enterprise

What is Anchore Enterprise?

Anchore Enterprise is a software bill of materials (SBOM)-powered software supply chain management solution designed for a cloud-native world. It provides continuous visibility into supply chain security risks. Anchore Enterprise takes a developer-friendly approach that minimizes friction by embedding automation into development toolchains to generate SBOMs and accurately identify vulnerabilities, malware, misconfigurations, and secrets for faster remediation.

Gaining Visibility with SBOMs

Anchore Enterprise generates detailed SBOMs at each step in the development process, providing a complete inventory of the software components, including the direct and transitive dependencies you use. Anchore Enterprise stores all SBOMs in an SBOM repository to enable ongoing monitoring of your software for new or zero-day vulnerabilities that can arise even post-deployment.

Anchore Enterprise also detects SBOM drift in the build process, issuing an alert for changes in SBOMs so they can be assessed for risk, malware, compromised software, and malicious activity.

Identifying Vulnerability and Security Issues

Starting with the SBOM, Anchore Enterprise uses multiple vulnerability feeds along with a precision vulnerability matching algorithm to pinpoint relevant vulnerabilities and minimize false positives. Anchore Enterprise also identifies malware, cryptominers, secrets, misconfigurations, and other security issues.

Automating through Policies

Anchore Enterprise includes a powerful policy engine that enables you to define guardrails and automate compliance with industry standards or internal rules. Using Anchore’s customizable policies, you can automatically identify the security issues that you care about and alert developers or create policy gates for critical issues.

1 - Anchore Enterprise Capabilities

SBOM Generation and Management

A software bill of materials (SBOM) is the foundational element that powers Anchore Enterprise’s secure management of the software supply chain. Anchore Enterprise automatically generates and analyzes comprehensive SBOMs at each step of the development lifecycle. SBOMs are stored in a repository to provide visibility into software components and dependencies as well as continuous monitoring for new vulnerabilities and risks throughout the development process and post-deployment. See SBOM Generation and Management for more information.

About Anchore Enterprise SBOMs

An SBOM is a list of software components and relevant metadata that includes packages, code snippets, licenses, configurations, and other elements of an application.

Anchore Enterprise generates high-fidelity SBOMs by scanning container images and source code repositories. Anchore’s native SBOM format includes a rich set of metadata that is a superset of data included in SBOM standards such as SPDX and CycloneDX. Using this additional level of metadata, Anchore can identify secrets, file permissions, misconfiguration, malware, insecure practices, and more.

Anchore Enterprise SBOMs identify:

  • Open source dependencies including ecosystem type (OS, language, and other metadata)
  • Nested dependencies in archive files (WAR files, JAR files and more)
  • Package details such as name, version, creator, and license information
  • Filesystem metadata such as the file name, size, permissions, creation time, modification time, and hashes
  • Malware
  • Secrets, keys, and credentials

Anchore Enterprise supported ecosystems

Anchore Enterprise supports the following packaging ecosystems when identifying SBOM content. The Operating System category captures Linux packaging ecosystems. The Binary detector will inspect content to identify binaries that were installed outside of packaging ecosystems.

  • Operating System
    • RPM
    • DEB
    • APK
  • NPM
  • Ruby Gems
  • Python
  • Java
  • NuGet
  • Golang
  • Binaries
    • Apache httpd
    • BusyBox
    • Consul
    • Golang
    • HAProxy
    • Helm
    • Java
    • Memcached
    • Nodejs
    • PHP
    • Perl
    • PostgreSQL
    • Python
    • Redis
    • Rust
    • Traefik

How Anchore Enterprise Uses SBOMs

Identify Vulnerabilities and Risk for Remediation

Anchore Enterprise generates detailed SBOMs at each stage of the software development lifecycle and stores them in a centralized repository to provide visibility into components and open source dependencies. These SBOMs are analyzed for vulnerabilities, malware, secrets (embedded passwords and credentials), misconfigurations, and other risks. Because SBOMs are stored in a repository, users can then continually monitor SBOMs for new vulnerabilities that arise, even post-deployment.

Detect SBOM Drift

Anchore Enterprise detects SBOM drift in the build process, identifying changes in SBOMs so they can be assessed for new risks or malicious activity. Users can set policy rules that alert them when components are added, changed, or removed so that they can quickly identify new vulnerabilities, developer errors, or malicious efforts to infiltrate builds. See SBOM Drift for more information.

Meet Compliance Requirements

Using the Anchore Enterprise UI or API, users can review SBOMs, generate reports, and export SBOMs as a JSON file. Anchore Enterprise can also export aggregated SBOMs for entire applications that can then be shared externally to meet customer and federal compliance requirements.

Customized Policy Rules

Anchore Enterprise’s high-fidelity SBOMs provide users with a rich set of metadata that can be used in customized policies.

Reduce False Positives

The extensive information provided in SBOMs generated by Anchore Enterprise allows for more precise vulnerability matching, resulting in higher accuracy and fewer false positives.

Vulnerability and Security Scanning

Vulnerability and security scanning is an essential part of any vulnerability management strategy. Anchore Enterprise enables you to scan for vulnerabilities and security risks at any stage of your software development process, including source code repositories, CI/CD pipelines, container registries, and container runtime environments. By scanning at each stage in the process, you will find vulnerabilities and other security risks earlier and avoid delaying software delivery. See Anchore Enterprise Vulnerability Scanner for more information.

Continuous Scanning and Analysis

Anchore Enterprise provides continuous and automated scanning of an application’s code base, including related artifacts such as containers and code repositories. Anchore Enterprise starts the scanning process by generating and storing a high-fidelity SBOM that identifies all of the open source and proprietary components and their direct and transitive dependencies. Anchore uses this detailed SBOM to accurately identify vulnerabilities and security risks.

Identifying Zero-Day Vulnerabilities

When a zero-day vulnerability arises, Anchore Enterprise can instantly identify which components and applications are impacted by simply re-analyzing your stored SBOMs. You don’t need to re-scan applications or components.

Multiple Vulnerability Feeds

Anchore Enterprise uses a broad set of vulnerability feed sources, including the National Vulnerability Database, GitHub Security Advisories, feeds for popular Linux distros and packages, and an Anchore-curated dataset for suppression of known false-positive vulnerability matches. See Feeds Overview for a full list of feed sources.

Precision Vulnerability Matching

Anchore Enterprise applies a best-in-class precision matching algorithm to select vulnerability data from the most accurate feed source. For example, when Anchore’s detailed SBOM data identifies that there is a specific Linux distro, such as RHEL, Anchore Enterprise will automatically use that vendor’s feed source instead of reporting every Linux vulnerability. Anchore’s precision vulnerability matching algorithm reduces false positives and false negatives, saving developer time. See the Managing False Positives section within this topic for additional ways that Anchore Enterprise reduces false positives.

Vulnerability Management and Remediation

Focusing solely on identifying vulnerability and security issues without remediation is not good enough for today’s modern DevSecOps teams. Anchore Enterprise combines the power of a rich set of SBOM metadata, reporting, and policy management capabilities to enable customers to remediate issues with the flexibility and granularity needed to avoid disrupting or slowing down software production.

Managing False Positives

Anchore Enterprise provides a number of innovative capabilities to help reduce the number of false positives and optimize the signal-to-noise ratio. It starts with accurate component identification through Anchore’s high-fidelity SBOMs and a precision vulnerability matching algorithm for fewer false positives. In addition, allowlists and temporary allowlists provide for exceptions, reducing ongoing alerts. Lastly, Anchore Enterprise enables users to correct false positives so that they are not raised in subsequent scans. “Corrections” improve result accuracy over time and further improve the signal-to-noise ratio.

Flexible Policy Enforcement

Anchore Enterprise enables users to define automated rules that indicate which vulnerabilities violate their organizations’ policies. For example, an organization may raise policy violations for vulnerabilities scored as Critical or High that have a fix available. These policy violations can generate alerts and notifications or be used to stop builds in the CI/CD pipeline or prevent code from moving to production. Policy enforcement can be applied at any stage in the development process, from the selection and usage of open source components through the build, staging, and deployment process. See Policy for more information.
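For illustration, a minimal sketch of such a rule, using the rule format shown in the Policies section later in this document, is shown below. It raises a STOP action for any package vulnerability of high severity or greater that has a fix available; the id value is a placeholder, and the fix_available parameter is assumed to be supported by the vulnerabilities gate in your deployment.

{
  "action": "STOP",
  "gate": "vulnerabilities",
  "trigger": "package",
  "id": "rule-placeholder-1",
  "params": [
    { "name": "package_type", "value": "all" },
    { "name": "severity_comparison", "value": ">=" },
    { "name": "severity", "value": "high" },
    { "name": "fix_available", "value": "true" }
  ]
}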

Streamlined Remediation

Anchore Enterprise provides capabilities to automatically alert developers of issues through their existing tools, such as Jira or Slack. It also lets users define actionable remediation workflows with automated remediation recommendations.

Open Source Security, Dependencies, and Licenses

Anchore Enterprise gives users the ability to identify and track open source dependencies that are incorporated at any stage in the software lifecycle. Anchore Enterprise scans source code repositories, CI/CD pipelines, and container registries to generate SBOMs that include both direct and transitive dependencies and to identify exactly where those dependencies are found.

Anchore Enterprise also identifies the relevant open source licenses and enables users to ensure that the open source components used along with their dependencies are compliant with all license requirements. License policies are customizable and can be tailored to fit each organization’s open source requirements.

Compliance with Standards

Anchore Enterprise provides a flexible policy engine that enables you to identify and alert on the most important vulnerabilities and security issues, and to meet internal or external compliance requirements. You can leverage out-of-the-box policy packs for common compliance standards, or create custom policies for your organization’s needs. You can define rules against the most complete set of metadata and apply policies at the most granular level with different rules for different applications, teams, and pipelines.

Anchore offers out-of-the-box policy packs to help you comply with NIST and CIS standards that are foundational for such industry-specific standards as HIPAA and PCI DSS.

Flexible Policy Enforcement

Policies are flexible and provide both notifications and gates to prevent code from moving along the development pipeline or into production based on your criteria. You can define policy rules for image and file metadata, file contents, licenses, and vulnerability scoring. And you can define unique rules for each team, for each application, and for each pipeline.

Automated Rules

Anchore Enterprise enables users to define automated rules that indicate which vulnerabilities violate their organization’s policies. For example, an organization may raise policy violations for vulnerabilities scored as Critical or High that have a fix available. These policy violations can generate alerts and notifications or be used to stop builds in the CI/CD pipeline or prevent code from moving to production. You can apply policy enforcement at any stage in the development process from the selection and usage of open source components through the build, staging, and deployment process.

Anchore Enterprise Policy Packs

Anchore Enterprise provides the following out-of-the-box Policy Packs that automate checks for common compliance programs, standards, and laws including CIS, NIST, FedRAMP, CISA vulnerabilities, HIPAA, PCI DSS, and more. Policy Packs comprise bundled policies and are flexible so that you can modify them to meet your organization’s requirements.

  • CIS Policy Pack

    The CIS Policy Pack validates a subset of security and compliance checks against container image best practices and NIST 800-53 and NIST 800-190 security controls and requirements. To expand CIS security controls, you can customize the policies in accordance with CIS Benchmarks.

  • CISA Vulnerabilities Policy Pack

    The CISA Vulnerabilities Policy Pack validates images against the CISA Known Exploited Vulnerabilities Catalog that is maintained by CISA and the U.S. Department of Homeland Security.

  • DISA Image Creation and Deployment Guide Policy Pack

    The DISA Image Creation and Deployment Guide Policy Pack provides security and compliance checks that align with specific NIST 800-53 and NIST 800-190 security controls and requirements as described in the Department of Defense (DoD) Container Image Creation and Deployment Guide.

  • DoD Iron Bank Policy Pack

    The DoD Iron Bank Policy Pack validates images against DoD security and compliance requirements in alignment with U.S. Air Force security standards at Platform One and Iron Bank.

  • FedRAMP Policy Pack

    The FedRAMP Policy Pack validates whether container images scanned by Anchore Enterprise are compliant with the FedRAMP Vulnerability Scanning Requirements and also validates them against FedRAMP controls specified in NIST 800-53 Rev 5 and NIST 800-190.

  • SSDF Policy Pack

    The NIST 800-218 (SSDF) Policy Pack gathers evidence from container images scanned by Anchore Enterprise. This policy pack is meant for evidence collection during a NIST 800-218 compliance effort, not as a policy enforcement mechanism.

2 - Anchore Enterprise Architecture

Anchore Enterprise is a distributed application that runs on supported container runtime platforms. The product is deployed as a series of containers that provide services whose functions are made available through APIs. These APIs can be consumed directly via included clients such as AnchoreCTL and the GUI, or via out-of-the-box integrations for use with container registries, Kubernetes, and CI/CD tooling. Alternatively, users can interact directly with the APIs for custom integrations.

Services

APIs

  • Enterprise Public API

    The Enterprise API is the primary RESTful API for Anchore Enterprise and is the one used by AnchoreCTL, the GUI, and other plugins. This API is used to upload data such as a software bill of materials (SBOM) and container images, execute functions, and retrieve information (SBOM data, vulnerabilities, and the results of policy evaluations). The API also exposes the user and account management functions. See Using the Anchore API for more information.
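For illustration, a minimal request against this API, following the curl conventions used elsewhere in this documentation (credentials, server name, and port are placeholders), might look like:

curl -X GET -u {username:password} "http://{servername:port}/v2/images"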

Stateful Services

Vulnerability Feed Service

The Anchore Enterprise Feed Service downloads vulnerability data from public sources (OS vendors, NVD, and others), normalizes it into a consistent format, and stores it in a local database. This is the only service that requires internet access, which enables air-gapping of all other services outlined below. Data sources are defined by individual drivers that can be enabled or disabled by the user, with a download frequency also defined by the user. This data is then used by the policy engine when performing vulnerability scanning and can also be queried directly through the Enterprise API. See Anchore Enterprise Feeds for more information.

Policy Engine

The policy engine is responsible for loading an SBOM and associated content and then evaluating it against a set of policy rules. This resulting policy evaluation is then passed to the Catalog service. The policies are stored as a series of JSON documents that can be uploaded and downloaded via the Enterprise API or edited via the GUI.

Catalog

The catalog is the primary state manager of the system and provides data access to system services from the backend database service (PostgreSQL).

SimpleQueue

The SimpleQueue is another PostgreSQL-backed service, providing queues that the other components use for task execution, notifications, and other asynchronous operations.

Workers

Analyzers

An Analyzer is the component that generates an SBOM from an artifact or source repo (which may be passed through the API or pulled from a registry), performs the vulnerability and policy analysis, and stores the SBOM and the results of the analysis in the organization’s Anchore Enterprise repository. Alternatively, the AnchoreCTL client can be used to locally scan and generate the SBOM and then pass the SBOM to the analyzers via the API. Each Analyzer can process one container image or source repo at a time. You can increase the number of Analyzers (within the limits of your contract) to increase the throughput of the system and process multiple artifacts concurrently.

Clients

AnchoreCTL

AnchoreCTL is a Go-based command line client for Anchore Enterprise. It can be used to send commands to the backend API as part of manual or automated operations. It can also be used to generate SBOM content that is local to the machine it is run on.

AnchoreCTL is the recommended client for developing custom integrations with CI/CD systems or other situations where local processing of content needs to be performed before being passed to the Enterprise API.

Anchore Enterprise GUI

The Anchore Enterprise GUI is a front end to the API services and simplifies many of the processes associated with creating policies, viewing SBOMs, creating and running reports, and configuring the overall system (notifications, users, registry credentials, and more).

Integrations

Kubernetes Admission Controller

The Kubernetes Admission Controller is a plugin that can be used to intercept a container image as it is about to be deployed to Kubernetes. The image is passed to Anchore Enterprise which analyzes it to determine if it meets the organization’s policy rules. The policy evaluation result can then allow, warn, or block the deployment.

Kubernetes and ECS Runtime Inventory

anchore-k8s-inventory and anchore-ecs-inventory are agents that create an ongoing inventory of the images that are running in a Kubernetes or ECS cluster. The agents run inside the runtime environment (under a service account) and connect to the local runtime API. The agents poll the API on an interval to retrieve a list of container images that are currently in use.

Third-Party Plugins

Anchore provides a number of out-of-the-box plugins for CI/CD systems such as Jenkins, CircleCI, and others. These plugins are typically based on AnchoreCTL and allow an SBOM to be generated locally on a CI worker node before being uploaded to Anchore Enterprise for an evaluation.

Multi-Tenancy

Accounts

Accounts in Anchore Enterprise are a boundary that separates data, policies, notifications, and users into a distinct domain. An account can be mapped to a team, project, or application that needs its own set of policies applied to a specific set of content. Users may be granted access to multiple accounts.

Users

Users are local to an account and can have roles as defined by RBAC. Usernames must be unique across the entire deployment to enable API-based authentication requests. Certain users can be configured such that they have the ability to switch context between accounts, akin to a super user account.

3 - Concepts

How does Anchore Enterprise work?

Anchore takes a data-driven approach to analysis and policy enforcement. The system has the following discrete phases for each image analyzed:

  1. Fetch the image content and extract it, but never execute it.
  2. Analyze the image by running a set of Anchore analyzers over the image content to extract and classify as much metadata as possible.
  3. Save the resulting analysis in the database for future use and audit.
  4. Evaluate policies against the analysis result, including vulnerability matches on the artifacts discovered in the image.
  5. Update to the latest external data used for policy evaluation and vulnerability matches (feed sync), and automatically update image analysis results against any new data found upstream.
  6. Notify users of changes to policy evaluations and vulnerability matches.

Repeat steps 5 and 6 at intervals to ensure you have the latest external data and updated image evaluations.


The primary interface is a RESTful API that provides mechanisms to request analysis, policy evaluation, and monitoring of images in registries as well as query for image contents and analysis results. Anchore Enterprise also provides a command-line interface (CLI), and its own container.

The following modes provide different ways to use Anchore within the API:

  • Interactive Mode - Use the APIs to explicitly request an image analysis, or get a policy evaluation and content reports. The system only performs operations when specifically requested by a user.
  • Watch Mode - Use the APIs to configure Anchore Enterprise to poll specific registries and repositories/tags to watch for new images, and then automatically pull and evaluate them. The API sends notifications when a state changes for a tag’s vulnerability or policy evaluation.

Anchore can be easily integrated into most environments and processes using these two modes of operation.

Next Steps

Now let’s get familiar with Images in Anchore.

3.1 - Analyzing Images

Once an image is submitted to Anchore Enterprise for analysis, Anchore Enterprise will attempt to retrieve metadata about the image from the Docker registry and, if successful, will download the image and queue the image for analysis.

Anchore Enterprise can run one or more analyzer services to scale out processing of images. The next available analyzer worker will process the image.
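For example, a minimal sketch of submitting an image for centralized analysis using AnchoreCTL (the image name is a placeholder):

# Queue the image for analysis; Anchore Enterprise pulls the image content from the registry
anchorectl image add docker.io/library/nginx:latest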


During analysis, every package, software library, and file is inspected, and this data is stored in the Anchore database.

Anchore Enterprise includes a number of analyzer modules that extract data from the image including:

  • Image metadata
  • Image layers
  • Operating System Package Data (RPM, DEB, APKG)
  • File Data
  • Ruby Gems
  • Node.js NPMs
  • Java Archives
  • Python Packages
  • .NET NuGet Packages
  • File content

Once a tag has been added to Anchore Enterprise, the repository will be monitored for updates to that tag. See Image and Tag Watchers for more information about images and tags.

Any updated images will be downloaded and analyzed.

Next Steps

Now let’s get familiar with the Image Analysis Process.

3.1.1 - Base and Parent Images

A Docker or OCI image is composed of layers. These layers may be inherited from another image or created during the build of a specific image as defined by the instructions in a Dockerfile or other build process.

The ancestry of an image shows the image’s parent image(s) and base image. As defined by the Docker documentation, a parent of an image is the image used to start the build of the current image, typically the image identified in the FROM directive in the Dockerfile. If the parent image is SCRATCH then the image is considered a base image.

Anchore Enterprise provides an API call to retrieve all parents and bases of an image. This call dynamically computes the ancestry set using images that have already been analyzed by the system and returns a list of image digests and their respective layers. The ancestry computation is done by matching exact layer digests between the requested image and other images in the system.

If an image X is a parent of image Y, then image X’s layers will be the first N layers of image Y, where N is the number of layers in image X.

A case where an image may have multiple results in its ancestry is as follows:

A base distro image, for example debian:10:

FROM scratch
...

An application container image built from that Debian image, for example a Node.js image we’ll call mynode:latest:

FROM debian:10

# Install nodejs

The application image itself, built from the framework image, which we’ll call myapp:v1:

FROM mynode:latest
COPY ./app /
...

In this case, the parent image of myapp:v1 is mynode:latest and its base is debian:10. Anchore will return each of those images with their matching layers in the API call. See the API docs for more information on the specifics of the GET /v2/images/{digest}/ancestors API call itself.

The returned images can be used for subsequent calls to the base image comparison APIs for vulnerabilities and policy evaluations allowing you to determine which findings for an image are inherited from a specific parent or base image.
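For illustration, following the curl conventions used elsewhere in this documentation, the ancestry of an image with a placeholder digest sha256:xyz could be retrieved with:

curl -X GET -u {username:password} "http://{servername:port}/v2/images/sha256:xyz/ancestors"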

Comparing an Image with its Base or Parent

Anchore Enterprise provides a mechanism to compare the policy checks and security vulnerabilities of an image with those of a base image. This allows clients to:

  • filter out results that are inherited from a base image and focus on the results relevant to the application image
  • reverse the focus and examine the base image for policy check violations and vulnerabilities which could be a deciding factor in choosing the base image for the application

To read more about the base comparison features, see the following sections.

3.1.1.1 - Compare Base Image Policy Checks

This feature provides a mechanism to compare the policy checks for an image with those of a base image. You can read more about base images and how to find them here. Base comparison uses the same policy and tag to evaluate both images to ensure a fair comparison. The API yields a response similar to the policy checks API with an additional element within each triggered gate check to indicate whether the result is inherited from the base image.

Usage

This functionality is currently available via the Enterprise UI and API. Watch this space as we add base comparison support in other tools.

API

Refer to the API Access section for the API specification. The API route for base comparison is GET /enterprise/images/{imageDigest}/check. This API exposes similar path and query parameters as the image policy check API GET /images/{imageDigest}/check, plus an optional query parameter for supplying the digest of the base image. If the base digest is omitted, the system falls back to evaluating image policy checks without comparing the results to the base image.

Example request using curl to retrieve the policy check for an image digest sha256:xyz and tag p/q:r, and compare the results to a base image digest sha256:abc:

curl -X GET -u {username:password} "http://{servername:port}/v2/images/sha256:xyz/check?tag=p/q:r&base_digest=sha256:abc"

Example output:

[
  {
    "sha256:xyz": {
      "p/q:r": [
        {
          "detail": {
            "result": {
              "base_image_digest": "sha256:abc",
              "result": {
                "123": {
                  "result": {
                    "final_action": "stop",
                    "header": [
                      "Image_Id",
                      "Repo_Tag",
                      "Trigger_Id",
                      "Gate",
                      "Trigger",
                      "Check_Output",
                      "Gate_Action",
                      "Whitelisted",
                      "Policy_Id",
                      "Inherited_From_Base"
                    ],
                    "row_count": 2,
                    "rows": [
                      [
                        "123",
                        "p/q:r",
                        "41cb7cdf04850e33a11f80c42bf660b3",
                        "dockerfile",
                        "instruction",
                        "Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check",
                        "warn",
                        false,
                        "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
                        true
                      ],
                      [
                        "123",
                        "p/q:r",
                        "CVE-2019-5435+curl",
                        "vulnerabilities",
                        "package",
                        "MEDIUM Vulnerability found in os package type (APKG) - curl (CVE-2019-5435 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5435)",
                        "warn",
                        false,
                        "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
                        false
                      ]
                    ]
                  }
                },
                ...
              },
              ...
            },
            ...
          },
          ...
        }
      ]
    }
  }
]

Note that the header element Inherited_From_Base is a new column in the API response added to support base comparison. The corresponding row element in each item of rows uses a boolean value to indicate whether the gate result is present in the base image. In the above example:

  • Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check is triggered by both images, hence the Inherited_From_Base column is marked true
  • MEDIUM Vulnerability found in os package type (APKG) - curl (CVE-2019-5435 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5435) is not triggered by the base image, therefore the value of the Inherited_From_Base column is false

3.1.1.2 - Compare Base Image Security Vulnerabilities

This feature provides a mechanism to compare the security vulnerabilities detected in an image with those of a base image. You can read more about base images and how to find them here. The API yields a response similar to the vulnerabilities API, with an additional element within each result to indicate whether the result is inherited from the base image.

Usage

This functionality is currently available via the Enterprise UI and API. Watch this space as we add base comparison support in other tools.

API

Refer to the API Access section for the API specification. The API route for base comparison is GET /enterprise/images/{imageDigest}/vuln/{vtype}. This API exposes similar path and query parameters as the security vulnerabilities API GET /images/{imageDigest}/vuln/{vtype}, plus an optional query parameter for supplying the digest of the base image. If the base digest is omitted, the system falls back to retrieving security vulnerabilities in the image without comparing the results to the base image.

Example request using curl to retrieve security vulnerabilities for an image digest sha256:xyz and compare the results to a base image digest sha256:abc:

curl -X GET -u {username:password} "http://{servername:port}/v2/images/sha256:xyz/vuln/all?base_digest=sha256:abc"

Example output:

{
  "base_digest": "sha256:abc",
  "image_digest": "sha256:xyz",
  "vulnerability_type": "all",
  "vulnerabilities": [
    {
      "feed": "vulnerabilities",
      "feed_group": "alpine:3.12",
      "fix": "7.62.0-r0",
      "inherited_from_base": true,
      "nvd_data": [
        {
          "cvss_v2": {
            "base_score": 6.4,
            "exploitability_score": 10.0,
            "impact_score": 4.9
          },
          "cvss_v3": {
            "base_score": 9.1,
            "exploitability_score": 3.9,
            "impact_score": 5.2
          },
          "id": "CVE-2018-16842"
        }
      ],
      "package": "libcurl-7.61.1-r3",
      "package_cpe": "None",
      "package_cpe23": "None",
      "package_name": "libcurl",
      "package_path": "pkgdb",
      "package_type": "APKG",
      "package_version": "7.61.1-r3",
      "severity": "Medium",
      "url": "http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16842",
      "vendor_data": [],
      "vuln": "CVE-2018-16842"
    },
    {
      "feed": "vulnerabilities",
      "feed_group": "alpine:3.12",
      "fix": "2.4.46-r0",
      "inherited_from_base": false,
      "nvd_data": [
        {
          "cvss_v2": {
            "base_score": 5.0,
            "exploitability_score": 10.0,
            "impact_score": 2.9
          },
          "cvss_v3": {
            "base_score": 7.5,
            "exploitability_score": 3.9,
            "impact_score": 3.6
          },
          "id": "CVE-2020-9490"
        }
      ],
      "package": "apache2-2.4.43-r0",
      "package_cpe": "None",
      "package_cpe23": "None",
      "package_name": "apache2",
      "package_path": "pkgdb",
      "package_type": "APKG",
      "package_version": "2.4.43-r0",
      "severity": "Medium",
      "url": "http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9490",
      "vendor_data": [],
      "vuln": "CVE-2020-9490"
    }
  ]
}

Note that inherited_from_base is a new element in the API response added to support base comparison. The assigned boolean value indicates whether the exact vulnerability is present in the base image. In the above example:

  • CVE-2018-16842 affects the libcurl-7.61.1-r3 package in both images, hence inherited_from_base is marked true
  • CVE-2020-9490 affects the apache2-2.4.43-r0 package but does not affect the base image, therefore inherited_from_base is set to false

3.1.2 - Image Analysis Process

There are two types of image analysis:

  1. Centralized Analysis
  2. Distributed Analysis

Image analysis is performed as a distinct, asynchronous, and scheduled task driven by queues that analyzer workers periodically poll.

Image analysis_status states:

stateDiagram
    [*] --> not_analyzed: analysis queued
    not_analyzed --> analyzing: analyzer starts processing
    analyzing --> analyzed: analysis completed successfully
    analyzing --> analysis_failed: analysis fails
    analyzing --> not_analyzed: re-queue by timeout or analyzer shutdown
    analysis_failed --> not_analyzed: re-queued by user request
    analyzed --> not_analyzed: re-queued for re-processing by user request

Centralized Analysis

The analysis process is composed of several steps and utilizes several system components. The basic flow of the task is shown in the following example:

Centralized analysis high level summary:

sequenceDiagram
    participant A as AnchoreCTL
    participant R as Registry
    participant E as Anchore Deployment
    A->>E: Request Image Analysis
    E->>R: Get Image content
    R-->>E: Image Content
    E->>E: Analyze Image Content (Generate SBOM and secret scans etc) and store results
    E->>E: Scan sbom for vulns and evaluate compliance

The analyzers operate in a task loop for analysis tasks as shown below:

[Diagram: analyzer task loop]

Adding more detail, the API call trace between services looks similar to the following example flow:

[Diagram: API call trace between services]

Distributed Analysis

In distributed analysis, the analysis of image content takes place outside the Anchore deployment and the result is imported into the deployment. The image moves through the same state machine transitions, but for an imported analysis the ‘analyzing’ state covers processing of the import data (vulnerability scanning, policy checks, etc.) to prepare it for internal use; the system does not download or touch any image content.

High level example with AnchoreCTL:

sequenceDiagram
    participant A as AnchoreCTL
    participant R as Registry/Docker Daemon
    participant E as Anchore Deployment
    A->>R: Get Image content
    R-->>A: Image Content
    A->>A: Analyze Image Content (Generate SBOM and secret scans etc)
    A->>E: Import SBOM, secret search, fs metadata
    E->>E: Scan sbom for vulns and evaluate compliance
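A hedged sketch of this flow using AnchoreCTL, assuming your version supports the --from flag to select where the image content is read from locally (the image name is a placeholder):

# Analyze the image locally, then import the resulting SBOM and metadata into Anchore Enterprise
anchorectl image add registry.example.com/myapp:v1 --from registry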

Next Steps

Now let’s get familiar with Watching Images and Tags with Anchore.

3.1.2.1 - Malware Scanning

Overview

Anchore Enterprise supports the use of the open-source ClamAV malware scanner to detect malicious code embedded in container images. This scan occurs only at analysis time, when the image content itself is available. The scan results are available via the API, as well as in new policy gates that allow gating of images with malware findings.

Signature DB Updates

Each analyzer service will run a malware signature update before analyzing each image. This does add some latency to the overall analysis time but ensures the signatures are as up-to-date as possible for each image analyzed. The update behavior can be disabled if you prefer to manage the freshness of the db via another route, such as a shared filesystem mounted to all analyzer nodes that is updated on a schedule. See the configuration section for details on disabling the db update.

The status of the db update is present in each scan output for each image.

Scan Results

The malware content type is a list of scan results. Each result is the output of a single run of a malware scanner (by default, clamav).

The list of files found to contain malware signature matches is in the findings property of each scan result. An empty array value indicates no matches found.

The metadata property provides generic metadata specific to the scanner. For the ClamAV implementation, this includes the version data about the signature db used and if the db update was enabled during the scan. If the db update is disabled, then the db_version property of the metadata will not have values since the only way to get the version metadata is during a db update.

{
    "content": [
        {
            "findings": [
                {
                    "path": "/somebadfile",
                    "signature": "Unix.Trojan.MSShellcode-40"
                },
                {
                    "path": "/somedir/somepath/otherbadfile",
                    "signature": "Unix.Trojan.MSShellcode-40"
                }
            ],
            "metadata": {
                "db_update_enabled": true,
                "db_version": {
                    "bytecode": "331",
                    "daily": "25890",
                    "main": "59"
                }
            },
            "scanner": "clamav"
        }
    ],
    "content_type": "malware",
    "imageDigest": "sha256:0eb874fcad5414762a2ca5b2496db5291aad7d3b737700d05e45af43bad3ce4d"
}

Policy Rules

A policy gate called malware is available with two new triggers:

  • scans trigger will fire for each file and signature combination found in the image so that you can fail an evaluation of an image if malware was detected during the analysis scans
  • scan_not_run trigger will fire if there are no malware scans (even empty) available for the image

See policy checks for more details
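As an illustrative sketch, a rule using the scans trigger in the rule format shown later in this document might look like the following; the id value is a placeholder, and any required parameters for your deployment should be taken from the policy checks reference.

{
  "action": "STOP",
  "gate": "malware",
  "trigger": "scans",
  "id": "malware-rule-placeholder",
  "params": []
}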

3.1.2.2 - Content Hints

Anchore Enterprise includes the ability to read a user-supplied ‘hints’ file to allow users to add software artifacts to Anchore’s analysis report. The hints file, if present, contains records that describe a software package’s characteristics explicitly, and are then added to the software bill of materials (SBOM). For example, if the owner of a CI/CD container build process knows that there are some software packages installed explicitly in a container image, but Anchore’s regular analyzers fail to identify them, this mechanism can be used to include that information in the image’s SBOM, exactly as if the packages were discovered normally.

Hints cannot be used to modify the findings of Anchore’s analyzers beyond adding new packages to the report. If a user specifies a package in the hints file that is found by Anchore’s image analyzers, the hint is ignored and a warning message is logged to notify the user of the conflict.

Configuration

See Configuring Content Hints

Once enabled, the analyzer services will look for a file with a specific name, location, and format located within the container image: /anchore_hints.json.
The format of the file is illustrated using some examples below.

OS Package Records

OS Packages are those that will represent packages installed using OS / Distro style package managers. Currently supported package types are rpm, dpkg, apkg for RedHat, Debian, and Alpine flavored package managers respectively. Note that, for OS Packages, the name of the package is unique per SBOM, meaning that only one package named ‘somepackage’ can exist in an image’s SBOM, and specifying a name in the hints file that conflicts with one with the same name discovered by the Anchore analyzers will result in the record from the hints file taking precedence (override).

  • Minimum required values for a package record in anchore_hints.json
	{
	    "name": "musl",
	    "version": "1.1.20-r8",
	    "type": "apkg"
	}
  • Complete record demonstrating all of the available characteristics of a software package that can be specified
	{
	    "name": "musl",
	    "version": "1.1.20",
	    "release": "r8",
	    "origin": "Timo Ter\u00e4s <[email protected]>",
	    "license": "MIT",
	    "size": "61440",
	    "source": "musl",
	    "files": ["/lib/ld-musl-x86_64.so.1", "/lib/libc.musl-x86_64.so.1", "/lib"],
	    "type": "apkg"
	}

Non-OS/Language Package Records

Non-OS / language package records are similar in form to the OS package records, but with some extra/different characteristics being supplied, namely the location field. Since multiple non-os packages can be installed that have the same name, the location field is particularly important as it is used to distinguish between package records that might otherwise be identical. Valid types for non-os packages are currently java, python, gem, npm, nuget, go, binary.
For the latest types that are available, see the anchorectl image content <someimage> output, which lists available types for any given deployment of Anchore Enterprise.
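For example, using the command referenced above (the image name is a placeholder):

# List the content of an analyzed image; the output includes the available content types
anchorectl image content docker.io/library/alpine:latest

# View gem packages for the image, including any added via hints
anchorectl image content docker.io/library/alpine:latest -t gem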

  • Minimum required values for a package record in anchore_hints.json
	{
	    "name": "wicked",
	    "version": "0.6.1",  
	    "type": "gem"
	}
  • Complete record demonstrating all of the available characteristics of a software package that can be specified
	{
	    "name": "wicked",
	    "version": "0.6.1",
	    "location": "/app/gems/specifications/wicked-0.9.0.gemspec",
	    "origin": "schneems",
	    "license": "MIT",
	    "source": "http://github.com/schneems/wicked",
	    "files": ["README.md"],
	    "type": "gem"	    
	}

Putting it all together

Using the above examples, a complete anchore_hints.json file, as it would be discovered by Anchore Enterprise at /anchore_hints.json inside a container image, is provided here:

{
    "packages": [
	{
	    "name": "musl",
	    "version": "1.1.20-r8",
	    "type": "apkg"
	},
	{
	    "name": "wicked",
	    "version": "0.6.1",  
	    "type": "gem"
	}
    ]
}

With such a hints file in an image based for example on alpine:latest, the resulting image content would report these two package/version records as part of the SBOM for the analyzed image, when viewed using anchorectl image content <image> -t os and anchorectl image content <image> -t gem to view the musl and wicked package records, respectively.

Note about using the hints file feature

The hints file feature is disabled by default, and is meant to be used in very specific circumstances where a trusted entity is responsible for creating, installing, or removing an anchore_hints.json file in all containers being built. It is not meant to be enabled when the container image builds are not explicitly controlled, as the entity that is building container images could override any SBOM entry that Anchore would normally discover, which affects the vulnerability/policy status of an image. For this reason, the feature must be explicitly enabled in configuration, and only if appropriate for your use case.

3.1.3 - Image and Tag Watchers

Overview

Anchore has the capability to monitor external Docker Registries for updates to tags as well as new tags. It also watches for updates to vulnerability databases and package metadata (the “Feeds”).

Repository Updates: New Tags

The process for monitoring repositories for updates (the addition of new tag names) is done on a duty cycle and performed by the Catalog component(s). The scheduling and tasks are driven by queues provided by the SimpleQueue service.

  1. Periodically, controlled by the cycle_timers configuration in the config.yaml of the catalog, a process is triggered to list all the Repository Subscription records in the system and for each record, add a task to a specific queue.
  2. Periodically, also controlled by the cycle_timers config, a process is triggered to pick up tasks off that queue and process repository scan tasks. Each task looks approximately like the following:

[Diagram: repository scan task]

The output of this process is new tag_update subscription records, which are subsequently processed by the Tag Update handlers as described below. You can view the tag_update subscriptions using AnchoreCTL:

anchorectl subscription list -t tag_update
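Individual subscriptions can then be toggled. As a sketch, activating updates for a specific tag might look like the following; the tag is a placeholder, and the exact argument order may vary by version (see anchorectl subscription --help):

anchorectl subscription activate tag_update docker.io/library/alpine:latest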

Tag Updates: New Images

To detect updates to tags (the mapping of a new image digest to a tag name), Anchore periodically checks the registry and downloads the tag’s image manifest to compare the computed digests. This is done on a duty cycle for every tag_update subscription record. Therefore, the more subscribed tags exist in the system, the higher the load on the system to check for updates and detect changes. This processing, like repository update monitoring, is performed by the Catalog component(s).

The process, whose duty cycle is configured in the cycle_timers section of the catalog config.yaml, is described below:

[Diagram: tag update check process]

As new updates are discovered, they are automatically submitted to the analyzers, via the image analysis internal queue, for processing.

The overall process and interaction of these duty cycles works as follows:

[Diagram: interaction of the repository and tag watcher duty cycles]

Next Steps

Now let’s get familiar with Policy in Anchore.

3.1.4 - Analysis Archive

Anchore Enterprise is a data-intensive system. Storage consumption grows with the number of images analyzed, which leaves the following options for storage management:

  1. Over-provisioning storage significantly
  2. Increasing capacity over time, resulting in downtime (e.g. stop system, grow the db volume, restart)
  3. Manually deleting image analysis to free space as needed

In most cases, option 1 only works for a while, which then requires using 2 or 3. Managing storage provisioned for a postgres DB is somewhat complex and may require significant data copies to new volumes to grow capacity over time.

To help mitigate the storage growth of the db itself, Anchore Enterprise already provides an object storage subsystem that enables using external object stores like S3 or Swift to offload the unstructured data storage needs to systems that are more growth tolerant and flexible. This lowers the db overhead but does not fundamentally address the issue of unbounded growth in a busy system.

The Analysis Archive extends the object store even further by providing a system-managed way to move an image analysis and all of its related data (policy evaluations, tags, annotations, etc.) to a location outside of the main set of images. This consumes much less storage in the database when using an object store, preserves the last state of the image, and supports moving the analysis back into the main image set if it is needed in the future without requiring that the image itself be reanalyzed. Restoring from the archive does not require the actual Docker image to exist at all.

To facilitate this, the system can be thought of as two sets of analysis with different capabilities and properties:

Analysis Data Sets

Working Set Images

The working set is the set of images in the ‘analyzed’ state in the system. These images are stored in the database, optionally with some data in an external object store. Specifically:

  • State = ‘analyzed’
  • The set of images available from the /images api routes
  • Available for policy evaluation, content queries, and vulnerability updates

Archive Set Images

The archive set contains image analyses that reside almost entirely in the object store, which can be configured to be a different location than the object store used for the working set, with only the minimal metadata in the Anchore DB necessary to track and restore the analysis back into the working set in the future. An archived image analysis preserves all the annotations, tags, and metadata of the original analysis as well as all existing policy evaluation histories, but it is not updated with new vulnerabilities during feed syncs and is not available for new policy evaluations or content queries without first being restored into the working set.

  • Not listed in /images API routes
  • Cannot have policy evaluations executed
  • No vulnerability updates automatically (must be restored to working set first)
  • Available from the /archives/images API routes
  • Point-in-time snapshot of the analysis, policy evaluation, and vulnerability state of an image
  • Independently configurable storage location (analysis_archive property in the services.catalog property of config.yaml)
  • Small db storage consumption (if using external object store, only a few small records, bytes)
  • Able to use different type of storage for cost effectiveness
  • Can be restored to the working set at any time to restore full query and policy capabilities
  • The archive object store is not used for any API operations other than the restore process

An image analysis, identified by the digest of the image, may exist in both sets at the same time; they are not mutually exclusive. However, the archive is not automatically updated, and must be deleted and re-archived to capture updated state from the working set image if desired.

Benefits of the Archive

Because archived image analyses are stored in a distinct object store and tracked with their own metadata in the db, the images in that set will not impact the performance of working set image operations such as API operations, feed syncs, or notification handling. This helps keep the system responsive and performant in cases where the set of images that you’re interested in is much smaller than the set of images in the system, but you don’t want to delete the analysis because it has value for audit or historical reasons.

  1. Leverage cheaper and more scalable cloud-based storage solutions (e.g. S3 IA class)
  2. Keep the working set small to manage capacity and api performance
  3. Ensure the working set is images you actively need to monitor without losing old data by sending it to the archive

Automatic Archiving

To help facilitate data management automatically, Anchore supports rules to define which data to archive and when based on a few qualities of the image analysis itself. These rules are evaluated periodically by the system.

Anchore supports both account-scoped rules, editable by users in the account, and global system rules, editable only by the system admin account users. All users can view system global rules such that they can understand what will affect their images but they cannot update or delete the rules.

The process of automatic rule evaluation:

  1. The catalog component periodically (daily by default, but configurable) will run through each rule in the system and identify image digests that should be archived according to either account-local rules or system global rules.

  2. Each matching image analysis is added to the archive.

  3. Each successfully added analysis is deleted from the working set.

  4. For each digest migrated, a system event log entry is created, indicating that the image digest was moved to the archive.

Archive Rules

The rules that match images provide three selectors:

  1. Analysis timestamp - the age of the analysis itself, as expressed in days
  2. Source metadata (registry, repo, tag) - the values of the registry, repo, and tag values
  3. Tag history depth - the number of images mapped to a tag ordered by detected_at timestamp (the time at which the system observed the mapping of a tag to a specific image manifest digest)

Rule scope:

  • global - these rules will be evaluated against all images and all tags in the system, regardless of the owning account. (system_global = true)
  • account - these rules are only evaluated against the images and tags of the account which owns the rule. (system_global = false)

Example Rule:

{
    "analysis_age_days": 10,
    "created_at": "2019-03-30T22:23:50Z",
    "last_updated": "2019-03-30T22:23:50Z",
    "rule_id": "67b5f8bfde31497a9a67424cf80edf24",
    "selector": {
        "registry": "*",
        "repository": "*",
        "tag": "*"
    },
    "system_global": true,
    "tag_versions_newer": 10,
    "transition": "archive",
    "exclude": {
        "expiration_days": -1,
        "selector": {
            "registry": "docker.io",
            "repository": "alpine",
            "tag": "latest"
        }
    },
    "max_images_per_account": 1000
}
  • selector: a json object defining a set of filters on registry, repository, and tag that this rule will apply to.
    • Each entry supports wildcards. e.g. {"registry": "*", "repository": "library/*", "tag": "latest"}
  • tag_versions_newer: the minimum number of tag->digest mappings with newer timestamps that must be present for this rule to match an image tag.
  • analysis_age_days: the minimum age of the analysis to match, as indicated by the ‘analyzed_at’ timestamp on the image record.
  • transition: the operation to perform, one of the following
    • archive: works on the working set and transitions to archive, while deleting the source analysis upon successful archive creation. Specifically: the analysis will “move” to the archive and no longer be in the working set.
    • delete: works on the archive set and deletes the archived record on a match
  • exclude: a json object defining a set of filters on registry, repository, and tag, that will exclude a subset of image(s) from the selector defined above.
    • expiration_days: This allows the exclusion filter to expire. When set to -1, the exclusion filter does not expire
  • max_images_per_account: This setting may only be applied on a single “system_global” rule, and controls the maximum number of images allowed in the Anchore deployment (that are not archived). If this number is exceeded, Anchore will transition (according to the transition field value) the oldest images exceeding this maximum count.
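For contrast with the global example above, a minimal sketch of an account-scoped rule that deletes archived analyses older than 180 days; all field values are placeholders and should be adapted to your requirements:

{
    "analysis_age_days": 180,
    "selector": {
        "registry": "*",
        "repository": "*",
        "tag": "*"
    },
    "system_global": false,
    "tag_versions_newer": 0,
    "transition": "delete"
}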

Rule conflicts and application:

For an image to be transitioned by a rule it must:

  • Match at least 1 rule for each of its tag entries (either in the working set if the transition is archive, or in the archive set if the transition is delete)
  • All rule matches must be of the same scope, global and account rules cannot interact

Put another way, if any tag record for an image analysis is not defined to be transitioned, then the analysis record is not transitioned.

Usage

Image analysis can be archived explicitly via the API (and CLI) as well as restored. Alternatively, the API and CLI can manage the rules that control automatic transitions. For more information see the following:

Archiving an Image Analysis

See: Archiving an Image

Restoring an Image Analysis

See: Restoring an Image

Managing Archive Rules

See: Working with Archive Rules

3.2 - Policy

Once an image has been analyzed and its content has been discovered, categorized, and processed, the results can be evaluated against a user-defined set of checks to give a final pass/fail recommendation for an image. Anchore Enterprise policies are how users describe which checks to perform on what images and how the results should be interpreted.

A policy is made up of a set of rules that are used to perform an evaluation of a container image. The rules can define checks against an image for things such as:

  • security vulnerabilities
  • package allowlists and denylists
  • configuration file contents
  • presence of credentials in an image
  • image manifest changes
  • exposed ports

These checks are defined as Gates that contain Triggers, which perform specific checks and emit match results; together, these define the things that the system can automatically evaluate and return a decision about.

For a full listing of gates, triggers, and their parameters, see: Anchore Policy Checks

These policies can be applied globally or customized for specific images or categories of applications.

A policy evaluation can return one of two results:

PASSED, indicating that the image complies with your policy.

FAILED, indicating that the image is out of compliance with your policy.

Next Steps

Read more on Policies

3.2.1 - Policies and Evaluation

Introduction

Policies are the unit of policy definition and evaluation in Anchore Enterprise. A user may have multiple policies, but for a policy evaluation, the user must specify a policy to be evaluated or default to the policy currently marked ‘active’. See Working with Policies for more detail on manipulating and configuring policies using the system CLI.

Components of a Policy

A policy is a single JSON document, composed of several parts:

  • Policies - The named sets of rules and actions.
  • Allowlists - Named sets of rule exclusions to override a match in a policy rule.
  • Mappings - Ordered rules that determine which policies and allowlists should be applied to a specific image at evaluation time.
  • Allowlisted Images - Overrides for specific images to statically set the final result to a pass regardless of the policy evaluation result.
  • Blocklisted Images - Overrides for specific images to statically set the final result to a fail regardless of the policy evaluation result.

Example JSON for an empty policy, showing the sections and top-level elements:

{
  "id": "default0",
  "version": "2",
  "name": "My Default policy",
  "comment": "My system's default policy",
  "allowlisted_images": [],
  "denylisted_images": [],
  "mappings": [],
  "allowlists": [],
  "rule_sets": []
}

Policies

A policy contains zero or more rule sets. The rule sets in a policy define the checks to make against an image and the actions to recommend if the checks find a match.

Example of a single rule set JSON object, one entry in the rule_set array of the larger policy document:

{
  "name": "DefaultPolicy", 
  "version": "2",
  "comment": "Policy for basic checks", 
  "id": "ba6daa06-da3b-46d3-9e22-f01f07b0489a", 
  "rules": [
    {
      "action": "STOP", 
      "gate": "vulnerabilities", 
      "id": "80569900-d6b3-4391-b2a0-bf34cf6d813d", 
      "params": [
        { "name": "package_type", "value": "all" }, 
        { "name": "severity_comparison", "value": ">=" }, 
        { "name": "severity", "value": "medium" }
      ], 
      "trigger": "package"
    }
  ]
}

The above example defines a stop action to be produced for all package vulnerabilities found in an image that are severity medium or higher.

For information on how Rule Sets work and are evaluated, see: Rule Sets

Allowlists

An allowlist is a set of exclusion rules for trigger matches found during policy evaluation. An allowlist defines a specific gate and trigger_id (part of the output of a policy rule evaluation) that should have its action recommendation statically set to go. When a policy rule result is allowlisted, it is still present in the output of the policy evaluation, but its action is set to go and it is indicated that there was an allowlist match.

Allowlists are useful for things like:

  • Ignoring CVE matches that are known to be false-positives
  • Ignoring CVE matches on specific packages (perhaps if they are known to be custom patched)

Example of a simple allowlist as a JSON object from a policy:

{
  "id": "allowlist1",
  "name": "Simple Allowlist",
  "version": "2",
  "items": [
    { "id": "item1", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-10000+libssl" },
    { "id": "item2", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-10001+*" }
  ]
}

For more information, see: Allowlists

Mappings

Mappings are named rules that define which rule sets and allowlists to evaluate for a given image. The list of mappings is evaluated in order, so ordering matters: the first rule that matches an input image is used and all others are ignored.

Example of a simple mapping rule set:

[
  { 
    "name": "DockerHub",
    "registry": "docker.io",
    "repository": "library/postgres",
    "image": { "type": "tag", "value": "latest" },
    "rule_set_ids": [ "policy1", "policy2" ],
    "allowlist_ids": [ "allowlist1", "allowlist2" ]
  },
  {
    "name": "default", 
    "registry": "*",
    "repository": "*",
    "image": { "type": "tag", "value": "*" },
    "rule_set_ids": [ "policy1" ],
    "allowlist_ids": [ "allowlist1" ]
  }
]

For more information about mappings see: Mappings

Allowlisted Images

Allowlisted images are images, defined by registry, repository, and tag/digest/imageId, that will always result in a pass status for policy evaluation unless the image is also matched in the denylisted images section.

Example image allowlist section:

{ 
  "name": "AllowlistDebianStable",
  "registry": "docker.io",
  "repository": "library/debian",
  "image": { "type": "tag", "value": "stable" }
}

Denylisted Images

Denylisted images are images, defined by registry, repository, and tag/digest/imageId, that will always result in a policy evaluation status of fail. It is important to note that denylisting an image does not short-circuit the mapping evaluation or policy evaluations, so the full set of trigger matches will still be visible in the policy evaluation result.

Denylisted image matches override any allowlisted image matches (e.g. a tag that matches a rule in both lists will always be denylisted and fail).

Example image denylist section:

{ 
  "name": "BlAocklistDebianUnstable",
  "registry": "docker.io",
  "repository": "library/debian",
  "image": { "type": "tag", "value": "unstable" }
}

A complete policy example with all sections containing data:

{
  "id": "default0",
  "version": "2",
  "name": "My Default policy",
  "comment": "My system's default policy",
  "allowlisted_images": [
    {
      "name": "AllowlistDebianStable",
      "registry": "docker.io",
      "repository": "library/debian",
      "image": { "type": "tag", "value": "stable" }
    }
  ],
  "denylisted_images": [
    {
      "name": "DenylistDebianUnstable",
      "registry": "docker.io",
      "repository": "library/debian",
      "image": { "type": "tag", "value": "unstable" }
    }
  ],
  "mappings": [
    {
      "name": "DockerHub", 
      "registry": "docker.io",
      "repository": "library/postgres",
      "image": { "type": "tag", "value": "latest" },
      "rule_set_ids": [ "policy1", "policy2" ],
      "allowlist_ids": [ "allowlist1", "allowlist2" ]
    },
    {
      "name": "default", 
      "registry": "*",
      "repository": "*",
      "image": { "type": "tag", "value": "*" },
      "rule_set_ids": [ "policy1" ],
      "allowlist_ids": [ "allowlist1" ]
    }
  ],
  "allowlists": [
    {
      "id": "allowlist1",
      "name": "Simple Allowlist",
      "version": "2",
      "items": [
        { "id": "item1", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-10000+libssl" },
        { "id": "item2", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-10001+*" }
      ]
    },
    {
      "id": "allowlist2",
      "name": "Simple Allowlist",
      "version": "2",
      "items": [
        { "id": "item1", "gate": "vulnerabilities", "trigger": "package", "trigger_id": "CVE-1111+*" }
      ]
    }
  ],
  "rule_sets": [
    {
      "name": "DefaultPolicy",
      "version": "2",
      "comment": "Policy for basic checks",
      "id": "policy1",
      "rules": [
        {
          "action": "STOP",
          "gate": "vulnerabilities",
          "trigger": "package",
          "id": "rule1",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity_comparison", "value": ">=" },
            { "name": "severity", "value": "medium" }
          ]
        }
      ]
    },
    {
      "name": "DBPolicy",
      "version": "1_0",
      "comment": "Policy for basic checks on a db",
      "id": "policy2",
      "rules": [
        {
          "action": "STOP",
          "gate": "vulnerabilities",
          "trigger": "package",
          "id": "rule1",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity_comparison", "value": ">=" },
            { "name": "severity", "value": "low" }
          ]
        }
      ]
    }
  ]
}

Policy Evaluation

A policy evaluation results in a status of pass or fail, and that result is based on the evaluation of:

  1. The mapping section, to determine which rule sets and allowlists to select for evaluation against the given image and tag
  2. The output of the selected rule sets’ triggers and the applied allowlists
  3. The denylisted images section
  4. The allowlisted images section

A pass status means the image was evaluated against the policy and only go or warn actions resulted from the policy and allowlist evaluations, or the image was allowlisted. A fail status means the image was evaluated against the policy and at least one stop action resulted from the policy and allowlist evaluations, or the image was denylisted.

The flow chart for policy evaluation:


Next Steps

Read more about the Policies component of a policy.

3.2.2 - Rule Sets

Overview

A rule set is a named set of rules, represented as a JSON object within a Policy. A rule set is made up of rules that define a specific check to perform and a resulting action.

A Rule Set is made up of:

  • ID: a unique id for the rule set within the policy
  • Name: a human-readable name to give the rule set (may contain spaces, etc.)
  • A list of rules to define what to evaluate and the action to recommend on any matches for the rule

A simple example of a rule_set JSON object (found within a larger policy object):

{
  "name": "DefaultPolicy",
  "version": "2",
  "comment": "Policy for basic checks",
  "id": "policy1",
  "rules": [
      {
        "action": "STOP",
        "gate": "vulnerabilities",
        "id": "rule1",
        "params": [
          { "name": "package_type", "value": "all" },
          { "name": "severity_comparison", "value": ">=" },
          { "name": "severity", "value": "medium" }
        ],
        "trigger": "package",
        "recommendation": "Upgrade the package",
      }
  ]
}

The above example defines a stop action to be produced for all package vulnerabilities found in an image that are severity medium or higher.

Policy evaluation is the execution of all defined triggers in the rule set against the image analysis result and feed data, resulting in a set of output trigger matches, each of which carries the action from its rule definition. The final recommendation value for the policy evaluation is called the final action and is computed from the set of output matches: stop, go, or warn.


Policy Rules

Rules define the behavior of the policy at evaluation time. Each rule defines:

  • Gate - Example: dockerfile
  • Trigger - Example: exposed_ports
  • Parameters - Parameters specific to the gate/trigger to customize its match behavior
  • Action - The action to emit if a trigger evaluation finds a match. One of stop, go, warn. The only semantics of these values are in the aggregation behavior for the policy result.

Gates

A “Gate” is a logical grouping of trigger definitions and provides a broader context for the execution of triggers against image analysis data. You can think of gates as the “things to be checked”, while the triggers provide the “which check to run” context. Gates do not have parameters themselves, but namespace the set of triggers to ensure there are no name conflicts.

Examples of gates:

  • vulnerabilities
  • packages
  • npms
  • files

For a complete listing see: Anchore Policy Checks

Triggers

Triggers define a specific condition to check within the context of a gate, optionally with one or more input parameters. A trigger is logically a piece of code that executes with the image analysis content and feed data as inputs and performs a specific check. A trigger emits matches for each instance of the condition it checks for in the image. Thus, a single gate/trigger policy rule may result in many matches in the final policy result, often with different match specifics (e.g. package names, CVEs, or filenames).

Trigger parameters are passed as name, value pairs in the rule JSON:

{
  "action": "WARN",
  "gate": "vulnerabilities",
  "trigger": "package",
  "params": [
    {  "name": "param1", "value": "value1" },
    {  "name": "param2", "value": "value2" },
    {  "name": "paramN", "value": "valueN" }
  ]
}

For a complete listing of gates, triggers, and their parameters, see: Anchore Policy Checks

Policy Evaluation

  • All rules in a selected rule_set are evaluated; there are no short-circuits
  • Rules whose triggers and parameters find a match in the image analysis data will “fire”, resulting in a record of the match and parameters. A trigger may fire many times during an evaluation (e.g. many CVEs found).
  • Each firing of a trigger generates a trigger_id for that match
  • Rules may be executed in any order and are executed in isolation (e.g. conflicting rules are allowed; it’s up to the user to ensure that policies make sense)

A policy evaluation will always contain information about the policy and image that was evaluated as well as the Final Action. The evaluation can optionally include additional detail about the specific findings from each rule in the evaluated rule_set as well as suggested remediation steps.

Policy Evaluation Findings

When extra detail is requested as part of the policy evaluation, the following data is provided for each finding produced by the rules in the evaluated rule_set.

  • trigger_id - An ID for the specific rule match that can be used to allowlist a finding
  • Gate - The name of the gate that generated this finding
  • Trigger - The name of the trigger within the Gate that generated this finding
  • message - A human readable description of the finding
  • action - One of go, warn, stop based on the action defined in the rule that generated this finding
  • policy_id - The ID for the rule_set that this rule is a part of
  • recommendation - An optional recommendation provided as part of the rule that generated this finding
  • rule_id - The ID of the rule that generated this finding
  • allowlisted - Indicates if this match was present in the applied allowlist
  • allowlist_match - Only provided if allowlisted is true; contains a JSON object with details about an allowlist match (allowlist id, name, and allowlist rule id)
  • inherited_from_base - An optional field that indicates if this policy finding was present in a provided comparison image

Excerpt from a policy evaluation, showing just the policy evaluation output:

"findings": [
  {
    "trigger_id": "CVE-2008-3134+imagemagick-6.q16",
    "gate": "package",
    "trigger": "vulnerabilities",
    "message": "MEDIUM Vulnerability found in os package type (dpkg) - imagemagick-6.q16 (CVE-2008-3134 - https://security-tracker.debian.org/tracker/CVE-2008-3134)",
    "action": "go",
    "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
    "recommendation": "Upgrade the package",
    "rule_id": "rule1",
    "allowlisted": false,
    "allowlist_match": null,
    "inherited_from_base": false
  },
  {
    "trigger_id": "CVE-2008-3134+libmagickwand-6.q16-2",
    "gate": "package",
    "trigger": "vulnerabilities",
    "message": "MEDIUM Vulnerability found in os package type (dpkg) - libmagickwand-6.q16-2 (CVE-2008-3134 - https://security-tracker.debian.org/tracker/CVE-2008-3134)",
    "action": "go",
    "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
    "recommendation": "Upgrade the package",
    "rule_id": "rule1",
    "allowlisted": false,
    "allowlist_match": null,
    "inherited_from_base": false
  }
]

Final Action

The final action of a policy evaluation is the policy’s recommendation based on the aggregation of all trigger evaluations defined in the policy and the resulting matches emitted.

The final action of a policy evaluation will be:

  • stop - If there are any triggers that match with this action, the policy evaluation will result in an overall stop.
  • warn - If there are any triggers that match with this action, and no triggers that match with stop, then the policy evaluation will result in warn.
  • go - If there are no triggers that match with either stop or warn, then the policy evaluation results in go. go actions have no impact on the evaluation result, but are useful for recording the results of specific checks on an image in the audit trail of policy evaluations over time.
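
As a worked illustration of this aggregation (the JSON shape below is illustrative, not an actual API response): a rule set whose matches emitted the actions go, warn, and warn aggregates to a final action of warn; adding even one stop match would flip the final action to stop.

{
  "matched_actions": ["go", "warn", "warn"],
  "final_action": "warn"
}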

The policy findings are one part of the broader policy evaluation, which includes things like image allowlists and denylists, and which makes a final policy evaluation status determination based on the combination of several component executions. See Policies for more information on that process.

Next Steps

Read more about the Mappings component of a policy.

3.2.3 - Policy Mappings

Mappings in the policy are a set of rules, evaluated in order, that describe matches on an image id, digest, or tag, and the corresponding sets of rule sets and allowlists to apply to any image that matches the rule’s criteria.

Policies can contain one or more mapping rules that are used to determine which rule_sets and allowlists apply to a given image. Mappings match images on registry and repository, and finally by one of id, digest, or tag.

A mapping has:

  • Registry - The registry url to match, including wildcards (e.g. ‘docker.io’, ‘quay.io’, ‘gcr.io’, ‘*’)
  • Repository - The repository name to match, including wildcards (e.g. ’library/nginx’, ‘mydockerhubusername/myrepositoryname’, ’library/*’, ‘*’)
  • Image - The way to select an image that matches the registry and repository filters
    • type: how to reference the image and the expected format of the ‘value’ property
      • “tag” - just the tag name itself (the part after the ‘:’ in a docker pull string: e.g. nginx:latest -> ’latest’ is the tag name)
      • “id” - the image id
      • “digest” - the image digest (e.g. sha256:abc123)
    • value: the value to match against, including wildcards

Note: Unlike other parts of the policy, Mappings are evaluated in order and will halt on the first matching rule. This is important to understand when combined with wildcard matches since it enables sophisticated matching behavior.

Examples

Example 1, all images match a single catch-all rule:

[
  {
    "registry": "*",
    "repository": "*",
    "image": { "type": "tag", "value": "*"},
    "rule_set_ids": ["defaultpolicy"],
    "allowlist_ids": ["defaultallowlist"]
  }
]

Example 2, all “official” images from DockerHub are evaluated against officialspolicy and officialsallowlist (made-up names for this example), while all other images from DockerHub will be evaluated against defaultpolicy and defaultallowlist, and private GCR images will be evaluated against gcrpolicy and gcrallowlist:

[
  {
    "registry": "docker.io",
    "repository": "library/*",
    "image": { "type": "tag", "value": "*"},
    "rule_set_ids": [ "officialspolicy"],
    "allowlist_ids": [ "officialsallowlist"]
  },
  {
    "registry": "gcr.io",
    "repository": "*",
    "image": { "type": "tag", "value": "*"},
    "rule_set_ids": [ "gcrpolicy"],
    "allowlist_ids": [ "gcrallowlist"]
  },
  {
    "registry": "*",
    "repository": "*",
    "image": { "type": "tag", "value": "*"},
    "rule_set_ids": [ "defaultpolicy"],
    "allowlist_ids": [ "defaultallowlist"]
  }
]

Example 3, all images from an unknown registry will be evaluated against defaultpolicy and defaultallowlist, while an internal registry’s images will be evaluated against a different set (internalpolicy and internalallowlist):

[
  {
    "registry": "myregistry.mydomain.com:5000",
    "repository": "*",
    "image": { "type": "tag", "value": "*"},
    "policy_ids": [ "internalpolicy"],
    "allowlist_ids": [ "internalallowlist"]
  },
  {
    "registry": "*",
    "repository": "*",
    "image": { "type": "tag", "value": "*"},
    "policy_ids": [ "defaultpolicy"],
    "allowlist_ids": [ "defaultallowlist"]
  }
]

Using Multiple Policies and Allowlists

The result of evaluating the mapping section of a policy is the list of rule sets and allowlists that will be used for actually evaluating the image. Because multiple rule sets and allowlists can be specified in each mapping rule, you can define granular rule sets and allowlists and then combine them in the mapping rules.

Examples of schemes to use for how to split-up policies include:

  • Different policies for different types of checks such that each policy only uses one or two gates (e.g. vulnerabilities, packages, dockerfile)
  • One policy for web servers, another for database servers, another for logging infrastructure, etc.
  • Different policies for different parts of the stack: os-packages vs. application packages

Next Steps

Read more about the allowlists component of a policy.

3.2.4 - Allowlists

Allowlists provide a mechanism within a policy to explicitly override a policy-rule match. An allowlist is a named set of exclusion rules that match trigger outputs.

Example allowlist:

{
  "id": "allowlist1",
  "name": "My First Allowlist",
  "comment": "A allowlist for my first try",
  "version": "2",
  "items": [
    {
      "gate": "vulnerabilities",
      "trigger_id": "CVE-2018-0737+*",
      "id": "rule1",
      "expires_on": "2019-12-30T12:00:00Z"
    }
  ]
}

The components:

  • Gate: The gate to allowlist matches from (ensures trigger_ids are not matched in the wrong context)
  • Trigger Id: The specific trigger result to match and allowlist. This id is gate/trigger specific, as each trigger may have its own trigger_id format. We’ll use the most common for this example: the CVE trigger ids produced by the vulnerabilities->package gate-trigger. The trigger_id specified may include wildcards for partial matches.
  • id: an identifier for the rule; it need only be unique within the allowlist object itself
  • Expires On: (optional) specifies when a particular allowlist item expires. This is an RFC3339 date-time string. If the rule matches but is expired, the policy engine will NOT allowlist according to that match.

The allowlist is processed if it is specified in the mapping rule that was matched during policy evaluation, and it is applied to the results of the policy evaluation defined in that same mapping rule. If an allowlist item matches a specific policy trigger output, then the action for that output is set to go, and the policy evaluation result notes that the trigger output matched an allowlist item by associating it with the allowlist id and item id of the match.

An example of an allowlisted match, from a snippet of a policy evaluation result (see Policies for more information on the format of the policy evaluation result). This is a single row entry from the result:

[
  {
    "trigger_id": "CVE-2018-0737+openssl",
    "gate": "package",
    "trigger": "vulnerabilities",
    "message": "MEDIUM Vulnerability found in os package type (dpkg) - openssl (fixed in: 1.0.1t-1+deb8u9) - (CVE-2018-0737 - https://security-tracker.debian.org/tracker/CVE-2018-0737)",
    "action": "go",
    "policy_id": "myfirstpolicy",
    "recommendation": "Upgrade the package",
    "rule_id": "rule1",
    "allowlisted": true,
    "allowlist_match": {
      "matched_rule_id": "rule1",
      "allowlist_id": "allowlist1",
      "allowlist_name": "My First Allowlist"
    },
    "inherited_from_base": false
  }
]

Note: Allowlists are evaluated only as far as necessary. Once a policy rule match has been allowlisted by one allowlist item, it will not be checked again for allowlist matches. However, allowlist items may be evaluated out of order for performance optimization, so if multiple allowlist items match the same finding, any one of them may be the item that is actually matched against a given trigger_id.

Next Steps

Read more about the Anchore Policy Checks for a complete list of gates and triggers.

3.2.5 - Anchore Policy Checks

Introduction

This document describes the current Anchore gates (and related triggers/parameters) that are supported within Anchore policies.

Gate: dockerfile

Checks against the content of a dockerfile if provided, or a guessed dockerfile based on docker layer history if the dockerfile is not provided.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| instruction | Triggers if any directives in the list are found to match the described condition in the dockerfile. | instruction | The Dockerfile instruction to check. | from |
| instruction | | check | The type of check to perform. | = |
| instruction | | value | The value to check the dockerfile instruction against. | scratch |
| instruction | | actual_dockerfile_only | Only evaluate against a user-provided dockerfile, skip evaluation on inferred/guessed dockerfiles. Default is False. | true |
| effective_user | Checks if the effective user matches the provided user names, either as an allowlist or denylist depending on the type parameter setting. | users | User names to check against as the effective user (last user entry) in the image’s history. | root,docker |
| effective_user | | type | How to treat the provided user names. | denylist |
| exposed_ports | Evaluates the set of ports exposed. Allows configuring allowlist or denylist behavior. If type=allowlist, then any ports found exposed that are not in the list will cause the trigger to fire. If type=denylist, then any ports exposed that are in the list will cause the trigger to fire. | ports | List of port numbers. | 80,8080,8088 |
| exposed_ports | | type | Whether to use the port list as an allowlist or denylist. | denylist |
| exposed_ports | | actual_dockerfile_only | Only evaluate against a user-provided dockerfile, skip evaluation on inferred/guessed dockerfiles. Default is False. | true |
| no_dockerfile_provided | Triggers if anchore analysis was performed without supplying the actual image Dockerfile. | | | |
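
For instance, a rule using the exposed_ports trigger might look like the following sketch (the rule id is illustrative), producing a warn for any image that exposes port 22:

{
  "action": "WARN",
  "gate": "dockerfile",
  "trigger": "exposed_ports",
  "id": "rule-dockerfile-ports",
  "params": [
    { "name": "ports", "value": "22" },
    { "name": "type", "value": "denylist" }
  ]
}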

Gate: files

Checks against files in the analyzed image including file content, file names, and filesystem attributes.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| content_regex_match | Triggers for each file where the content search analyzer has found a match using configured regexes in the analyzer_config.yaml “content_search” section. If the parameter is set, the trigger will only fire for files that matched the named regex. Refer to your analyzer_config.yaml for the regex values. | regex_name | Regex string that also appears in the FILECHECK_CONTENTMATCH analyzer parameter in analyzer configuration, to limit the check to. If set, will only fire the trigger when the specific named regex was found in a file. | .password. |
| name_match | Triggers if a file exists in the container that has a filename that matches the provided regex. This does have a performance impact on policy evaluation. | regex | Regex to apply to file names for match. | .*.pem |
| attribute_match | Triggers if a filename exists in the container that has attributes that match those which are provided. This check has a performance impact on policy evaluation. | filename | Filename to check against provided checksum. | /etc/passwd |
| attribute_match | | checksum_algorithm | Checksum algorithm. | sha256 |
| attribute_match | | checksum | Checksum of file. | 832cd0f75b227d13aac82b1f70b7f90191a4186c151f9db50851d209c45ede11 |
| attribute_match | | checksum_match | Checksum operation to perform. | equals |
| attribute_match | | mode | File mode of file. | 00644 |
| attribute_match | | mode_op | File mode operation to perform. | equals |
| attribute_match | | skip_missing | If set to true, do not fire this trigger if the file is not present. If set to false, fire this trigger ignoring the other parameter settings. | true |
| suid_or_guid_set | Fires for each file found to have the suid or sgid bit set. | | | |
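
As an example, the name_match trigger from this gate could be used as in the following sketch (rule id illustrative) to flag private key or certificate files left in an image:

{
  "action": "WARN",
  "gate": "files",
  "trigger": "name_match",
  "id": "rule-files-pem",
  "params": [
    { "name": "regex", "value": ".*.pem" }
  ]
}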

Gate: passwd_file

Content checks for /etc/passwd for things like usernames, group ids, shells, or full entries.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| content_not_available | Triggers if the /etc/passwd file is not present/stored in the evaluated image. | | | |
| denylist_usernames | Triggers if a specified username is found in the /etc/passwd file. | user_names | List of usernames that will cause the trigger to fire if found in /etc/passwd. | daemon,ftp |
| denylist_userids | Triggers if a specified user id is found in the /etc/passwd file. | user_ids | List of userids (numeric) that will cause the trigger to fire if found in /etc/passwd. | 0,1 |
| denylist_groupids | Triggers if a specified group id is found in the /etc/passwd file. | group_ids | List of groupids (numeric) that will cause the trigger to fire if found in /etc/passwd. | 999,20 |
| denylist_shells | Triggers if a specified login shell for any user is found in the /etc/passwd file. | shells | List of shell commands to denylist. | /bin/bash,/bin/zsh |
| denylist_full_entry | Triggers if the entire specified passwd entry is found in the /etc/passwd file. | entry | Full entry to match in /etc/passwd. | ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin |

Gate: packages

Distro package checks

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| required_package | Triggers if the specified package, and optionally a specific version, is not found in the image. | name | Name of package that must be found installed in image. | libssl |
| required_package | | version | Optional version of package for exact version match. | 1.10.3rc3 |
| required_package | | version_match_type | The type of comparison to use for version if a version is provided. | exact |
| verify | Check package integrity against the package db in the image. Triggers for changes to or removal of content in all directories, or in the selected “dirs” parameter if provided, and can filter the type of check with the “check_only” parameter. | only_packages | List of package names to limit verification. | libssl,openssl |
| verify | | only_directories | List of directories to limit checks, so as to avoid checks on all directories. | /usr,/var/lib |
| verify | | check | Check to perform instead of all. | changed |
| denylist | Triggers if the evaluated image has a package installed that matches the named package, optionally with a specific version as well. | name | Package name to denylist. | openssh-server |
| denylist | | version | Specific version of package to denylist. | 1.0.1 |

Gate: vulnerabilities

CVE/Vulnerability checks.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| package | Triggers if a found vulnerability in an image meets the comparison criteria. | package_type | Only trigger for a specific package type. | all |
| package | | severity_comparison | The type of comparison to perform for severity evaluation. | > |
| package | | severity | Severity to compare against. | high |
| package | | cvss_v3_base_score_comparison | The type of comparison to perform for CVSS v3 base score evaluation. | > |
| package | | cvss_v3_base_score | CVSS v3 base score to compare against. | None |
| package | | cvss_v3_exploitability_score_comparison | The type of comparison to perform for CVSS v3 exploitability sub score evaluation. | > |
| package | | cvss_v3_exploitability_score | CVSS v3 exploitability sub score to compare against. | None |
| package | | cvss_v3_impact_score_comparison | The type of comparison to perform for CVSS v3 impact sub score evaluation. | > |
| package | | cvss_v3_impact_score | CVSS v3 impact sub score to compare against. | None |
| package | | fix_available | If present, the fix availability for the vulnerability record must match the value of this parameter. | true |
| package | | vendor_only | If True, an available fix for this CVE must not be explicitly marked as “won’t be addressed” by the vendor. | true |
| package | | max_days_since_creation | If provided, this CVE must be older than the number of days provided to trigger. | 7 |
| package | | max_days_since_fix | If provided (only evaluated when the fix_available option is also set to true), the fix’s first observed time must be older than the number of days provided to trigger. | 30 |
| package | | vendor_cvss_v3_base_score_comparison | The type of comparison to perform for vendor-specified CVSS v3 base score evaluation. | > |
| package | | vendor_cvss_v3_base_score | Vendor CVSS v3 base score to compare against. | None |
| package | | vendor_cvss_v3_exploitability_score_comparison | The type of comparison to perform for vendor-specified CVSS v3 exploitability sub score evaluation. | > |
| package | | vendor_cvss_v3_exploitability_score | Vendor CVSS v3 exploitability sub score to compare against. | None |
| package | | vendor_cvss_v3_impact_score_comparison | The type of comparison to perform for vendor-specified CVSS v3 impact sub score evaluation. | > |
| package | | vendor_cvss_v3_impact_score | Vendor CVSS v3 impact sub score to compare against. | None |
| package | | package_path_exclude | The regex to evaluate against the package path to exclude vulnerabilities. | test.jar |
| package | | inherited_from_base | If true, only show vulns inherited from the base; if false, only show vulns not inherited from the base. Don’t specify to include vulns from both the base image and the current image. | True |
| denylist | Triggers if any of a list of specified vulnerabilities has been detected in the image. | vulnerability_ids | List of vulnerability IDs; will cause the trigger to fire if any are detected. | CVE-2019-1234 |
| denylist | | vendor_only | If set to True, discard matches against this vulnerability if the vendor has marked it as “will not fix” in the vulnerability record. | True |
| stale_feed_data | Triggers if the CVE data is older than the window specified by the max_days_since_sync parameter (unit is number of days). | max_days_since_sync | Fire the trigger if the last sync was more than this number of days ago. | 10 |
| vulnerability_data_unavailable | Triggers if vulnerability data is unavailable for the image’s distro packages such as rpms or dpkg. Non-OS packages like npms and java are not considered in this evaluation. | | | |
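
For example, the denylist trigger could be used as in this sketch (rule id illustrative) to fail any image where a specific vulnerability is detected:

{
  "action": "STOP",
  "gate": "vulnerabilities",
  "trigger": "denylist",
  "id": "rule-vuln-denylist",
  "params": [
    { "name": "vulnerability_ids", "value": "CVE-2019-1234" }
  ]
}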

Gate: licenses

License checks against found software licenses in the container image

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| denylist_exact_match | Triggers if the evaluated image has a package installed with software distributed under the specified (exact match) license(s). | licenses | List of license names to denylist exactly. | GPLv2+,GPL-3+,BSD-2-clause |
| denylist_exact_match | | package_type | Only trigger for specific package type. | all |
| denylist_partial_match | Triggers if the evaluated image has a package installed with software distributed under the specified (substring match) license(s). | licenses | List of strings to do substring match for denylist. | LGPL,BSD |
| denylist_partial_match | | package_type | Only trigger for specific package type. | all |

Gate: ruby_gems

Ruby Gem Checks

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| newer_version_found_in_feed | Triggers if an installed GEM is not the latest version according to the GEM data feed. | | | |
| not_found_in_feed | Triggers if an installed GEM is not in the official GEM database, according to the GEM data feed. | | | |
| version_not_found_in_feed | Triggers if an installed GEM version is not listed in the official GEM feed as a valid version. | | | |
| denylist | Triggers if the evaluated image has a GEM package installed that matches the specified name and version. | name | Gem name to denylist. | time_diff |
| denylist | | version | Optional version to denylist specifically. | 0.2.9 |
| feed_data_unavailable | Triggers if anchore does not have access to the GEM data feed. | | | |

Gate: npms

NPM Checks

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| newer_version_in_feed | Triggers if an installed NPM is not the latest version according to the NPM data feed. | | | |
| unknown_in_feeds | Triggers if an installed NPM is not in the official NPM database, according to the NPM data feed. | | | |
| version_not_in_feeds | Triggers if an installed NPM version is not listed in the official NPM feed as a valid version. | | | |
| denylisted_name_version | Triggers if the evaluated image has an NPM package installed that matches the name and optionally a version specified in the parameters. | name | Npm package name to denylist. | time_diff |
| denylisted_name_version | | version | Npm package version to denylist specifically. | 0.2.9 |
| feed_data_unavailable | Triggers if the system does not have access to the NPM data feed. | | | |

Gate: secret_scans

Checks for secrets and content found in the image using configured regexes found in the “secret_search” section of analyzer_config.yaml.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| content_regex_checks | Triggers if the secret content search analyzer has found any matches with the configured and named regexes. Checks can be configured to trigger if a match is found or is not found (selected using the match_type parameter). Matches are filtered by the content_regex_name and filename_regex if they are set. The content_regex_name should be a value from the “secret_search” section of the analyzer_config.yaml. | content_regex_name | Name of content regexps configured in the analyzer that match if found in the image, instead of matching all. Names available by default are: [‘AWS_ACCESS_KEY’, ‘AWS_SECRET_KEY’, ‘PRIV_KEY’, ‘DOCKER_AUTH’, ‘API_KEY’]. | AWS_ACCESS_KEY |
| content_regex_checks | | filename_regex | Regexp to filter the content-matched files by. | /etc/.* |
| content_regex_checks | | match_type | Set to define the type of match - trigger if match is found (default) or not found. | found |
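
For example, a sketch of a rule (id illustrative) that stops an image when the analyzer’s AWS_ACCESS_KEY regex matches any file content:

{
  "action": "STOP",
  "gate": "secret_scans",
  "trigger": "content_regex_checks",
  "id": "rule-secrets-aws",
  "params": [
    { "name": "content_regex_name", "value": "AWS_ACCESS_KEY" }
  ]
}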

Gate: metadata

Checks against image metadata, such as size, OS, distro, architecture, etc.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| attribute | Triggers if a named image metadata value matches the given condition. | attribute | Attribute name to be checked. | size |
| attribute | | check | The operation to perform for the evaluation. | > |
| attribute | | value | Value used in comparison. | 1073741824 |
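
For example, combining the three parameters as in this sketch (rule id illustrative) warns when an image is larger than 1 GiB (1073741824 bytes):

{
  "action": "WARN",
  "gate": "metadata",
  "trigger": "attribute",
  "id": "rule-metadata-size",
  "params": [
    { "name": "attribute", "value": "size" },
    { "name": "check", "value": ">" },
    { "name": "value", "value": "1073741824" }
  ]
}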

Gate: always

Triggers that fire unconditionally if present in policy, useful for things like testing and deny-listing.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| always | Fires if present in a policy being evaluated. Useful for things like deny-listing images or testing mappings and allowlists by using this trigger in combination with policy mapping rules. | | | |

Gate: retrieved_files

Checks against content and/or presence of files retrieved at analysis time from an image

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| content_not_available | Triggers if the specified file is not present/stored in the evaluated image. | path | The path of the file to verify has been retrieved during analysis. | /etc/httpd.conf |
| content_regex | Evaluation of regex on retrieved file content. | path | The path of the file to verify has been retrieved during analysis. | /etc/httpd.conf |
| content_regex | | check | The type of check to perform with the regex. | match |
| content_regex | | regex | The regex to evaluate against the content of the file. | SSlEnabled |

Gate: Malware

| Trigger Name | Description | Parameters |
|---|---|---|
| scans | Triggers if any malware scanner has found any matches in the image. | |
| scan_not_run | Triggers if no scan was found for the image. | |

Gate: Tag Drift

Compares the SBOM from the evaluated image’s tag and the tag’s previous image, if found. Provides triggers to detect packages added, removed or modified.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| packages_added | Checks to see if any packages have been added. | package_type | Package type to filter for only specific types. If omitted, then all types are evaluated. | apk |
| packages_removed | Checks to see if any packages have been removed. | package_type | Package type to filter for only specific types. If omitted, then all types are evaluated. | apk |
| packages_modified | Checks to see if any packages have been modified. | package_type | Package type to filter for only specific types. If omitted, then all types are evaluated. | apk |

Gate: Image Source Drift

Triggers for evaluating diffs between an image’s source repo SBOM and the built image. Operates on the diffs defined by ‘contains’ relationships, where the image is the ‘source’ and the source revisions are the ’target’ of the relationship.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| package_downgraded | Checks to see if any packages have a lower version in the built image than specified in the input source sboms. | package_types | Types of package to filter by. | java,npm |
| package_removed | Checks to see if any packages are not installed that were expected based on the image’s related input source sboms. | package_types | Types of package to filter by. | java,npm |
| no_related_sources | Checks to see if there are any source sboms related to the image. Findings indicate that the image does not have a source sbom to detect drift against. | | | |

Gate: Ancestry

Checks the image ancestry against approved images.

| Trigger Name | Description | Parameter | Description | Example |
|---|---|---|---|---|
| allowed_base_image_digest | Checks to see if the base image is approved. | base_digest | List of approved base image digests. | sha256:123abc |
| allowed_base_image_tag | Checks to see if the base image is approved. | base_tag | List of approved base image tags. | docker.io/debian:latest |
| no_ancestors_analyzed | Checks to see if the image has a known ancestor. | | | |

Next Steps

Now that you have a good grasp on the core concepts and architecture, check out the Requirements section for running Anchore.

3.3 - Remediation

After Anchore analyzes images, discovers their contents, and matches vulnerabilities, it can suggest possible actions that can be taken.

These actions range from adding a Healthcheck to your Dockerfile, to upgrading a package version.

Since the solutions for resolving vulnerabilities can vary, and may require several different forms of remediation and intervention, Anchore provides the capability to plan out your course of action.

Action Plans

Action plans group the resolutions that may be taken to address the vulnerabilities or issues found in a particular image, and provide a way for you to take “action”.

Currently, we support one type of Action Plan, which can be used to notify an existing endpoint configuration of those resolutions. This is a great way to facilitate communication across teams when vulnerabilities need to be addressed.

Here’s an example JSON that describes an Action Plan for notifications:

{
    "type": "notification",
    "image_tag": "docker.io/alpine:latest",
    "image_digest": "sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a",
    "bundle_id": "anchore_default_bundle",
    "resolutions": [
        {
            "trigger_ids": ["CVE-2020-11-09-fake"],
            "content": "This is a Resolution for the CVE",
        }
    ],
    "subject": "Actions required for image: alpine:latest",
    "message": "These are some issues Anchore found in alpine:latest, and how to resolve them",
    "endpoint": "smtp",
    "configuration_id": "cda118f9ec63ddefb4a173a2b2a03"
}

Parts:

  • type: The type of action plan being submitted (currently, only notification supported)
  • image_tag: The full image tag of the image requiring action
  • image_digest: The image digest of the image requiring action
  • bundle_id: The id of the policy bundle that discovered the vulnerabilities
  • resolutions: A list composed of the remediations and corresponding trigger IDs
  • subject: The subject line for the action plan notification
  • message: The body of the message for the action plan notification
  • endpoint: The type of notification endpoint the action plan will be sent to
  • configuration_id: The uuid of the notification configuration for the above endpoint

4 - Requirements

Introduction

This section details the requirements to run Anchore Enterprise.

Database

Anchore Enterprise requires a PostgreSQL version 13 or higher database to provide persistent storage for image, policy and analysis data.

The database can be run in a container, as configured in the example Docker Compose file, or it can be provided as an external service to Anchore Enterprise. PostgreSQL compatible databases, such as Amazon RDS for PostgreSQL, can be used for highly-scalable cloud deployments.

Memory

The Anchore Enterprise container will typically operate at a steady state that uses less than 2 GB of memory. However, under load and during large feed synchronization operations, memory usage may spike above 4 GB. Therefore, for production deployments, a minimum of 8 GB is recommended for each service.

Network

Anchore requires the following two categories of network access:

  • Registry Access: Network connectivity, including DNS resolution, to the registries from which Anchore Enterprise needs to download images.
  • Feed Service: Anchore Enterprise Feeds requires access to the upstream data feeds from supported Linux distributions and package registries. See Feeds Endpoints for the full list of endpoints.

Security

Anchore Enterprise is deployed as source repositories or container images that can be run manually using Docker Compose, Kubernetes or any container platform that supports Docker compatible images.

By default, Anchore Enterprise does not require any special permissions. It can be run as an unprivileged container with no access to the underlying Docker host.

Note: Anchore Enterprise can be configured to pull images through the Docker Socket. However, this configuration is not recommended, as it grants the Anchore Enterprise container added privileges, and may incur a performance impact on the Docker Host.

Storage

Anchore Enterprise uses a PostgreSQL database to store persistent data for images, tags, policies, subscriptions and other artifacts. One persistent storage volume is required for configuration information, and two optional storage volumes may be provided as described below.

  • Configuration volume: This volume is used to provide persistent storage to the container, from which it will read its configuration files and, optionally, certificates. Requirement: Less than 1MB.
  • [Optional] Temporary storage: The temporary storage volume is recommended but not required. During the analysis of images, Anchore Enterprise downloads and extracts all of the layers required for an image. These layers are extracted and analyzed, after which the layers and extracted data are deleted. If temporary storage is not configured, then the container’s ephemeral storage will be used to store temporary files; however, performance is likely to be improved by using a dedicated volume. A temporary storage volume may also be used for image-layer caching to speed up analysis. Requirement: Three times the uncompressed image size to be analyzed. Note: A temporary volume is required to work around a kernel driver bug for container hosts that use OverlayFS or OverlayFS2 storage with a kernel older than 4.13.
  • [Optional] Object storage Anchore Enterprise stores documents containing archives of image analysis data and policies as JSON documents. By default, these documents are stored within the PostgreSQL database. However, Anchore Enterprise can be configured to store archive documents in a filesystem (volume), S3 Object store, or Swift Object Store. Requirement: Number of images x 10MB (estimated).

Enterprise UI

The Anchore Enterprise User Interface is delivered as a Docker container that can be run on any Docker compatible runtime.

The Anchore Enterprise UI module interfaces with the Anchore API using the external API endpoint. The UI requires access to the Anchore database, where it creates its own namespace for persistent configuration storage. Additionally, a Redis database is used to store session information.

  • Runtime

    • Docker compatible runtime (version 1.12 or higher)
  • Storage

    • Configuration volume This volume is used to provide persistent storage to the container from which it will read its configuration files and optionally certificates. Requirement: Less than 1MB
  • Network

    • Ingress
      • The Anchore UI module publishes a web UI service by default on port 3000, however, this port can be remapped.
    • Egress
      • The Anchore UI module requires access to two network services:
        • External API endpoint (typically port 8228)
        • Redis Database (typically port 6379)
  • Redis Service

    • Version 4 or higher

Note: If you install the Anchore Enterprise UI using our installation examples, they include a deployment of a Redis service as part of the UI deployment process.

Next Steps

If you feel you have a solid grasp of the requirements for deploying Anchore Enterprise, we recommend following one of our installation guides.

5 - Anchore Enterprise Feeds

Overview

Anchore Enterprise Feeds is an On-Premises service that supplies operating system and application eco-system vulnerability data and package data for consumption by the Anchore Policy Engine. The Policy Engine uses this data for finding vulnerabilities and evaluating policies. For more information about how Anchore manages feed data, see Feeds Overview.

Anchore maintains a public index of Grype databases built and published daily at https://toolbox-data.anchore.io/grype/databases/listing.json for use by all. Anchore Enterprise Feeds offers the following benefits over those pre-built grype databases:

  • Run Anchore Enterprise in Air-Gapped mode.
  • Granular control and configuration over feed data due to the On-Premises installation: configure how often the data from external sources is synced, and enable/disable individual data providers responsible for processing normalized data.
  • Access to an Anchore-curated dataset for suppressing known false-positive vulnerability matches.

Design

Anchore Enterprise Feeds has three high-level components:

  • Drivers – Communicate with upstream sources, fetching data and normalizing it for Anchore.
  • Database – Stores the current state of the normalized data for use by Anchore.
  • API – Serves the data to clients, supporting update-only fetches.

Drivers

A driver downloads raw data from an external source and normalizes it. Each driver outputs normalized data for one of the four feed types: (os) vulnerabilities, packages, nvd, or third-party feeds.

  • Drivers responsible for operating system package vulnerabilities gather raw data from the respective OS sources listed below.
  • Package drivers process the official list of packages maintained by the NPM and RubyGems organizations.
  • The nvdv2 driver processes CVEs from the NIST database and supplies normalized data that is used for matching non-OS packages such as Java, Python, NPM, GEM, and NuGet.

All drivers except the package drivers are enabled by default. The service has configuration toggles to enable/disable each driver individually, and to tune driver-specific settings.

| Driver | Feed Type | External Data Source |
|---|---|---|
| suse | vulnerabilities | https://www.suse.com/support/security/oval/ |
| alpine | vulnerabilities | https://secdb.alpinelinux.org |
| rhel | vulnerabilities | https://access.redhat.com/hydra/rest/securitydata/cve.json, https://www.redhat.com/security/data/oval/v2 |
| debian | vulnerabilities | https://security-tracker.debian.org/tracker/data/json, https://salsa.debian.org/security-tracker-team/security-tracker/raw/master/data/DSA/list |
| oracle | vulnerabilities | https://linux.oracle.com/security/oval/com.oracle.elsa-all.xml.bz2 |
| ubuntu | vulnerabilities | https://launchpad.net/ubuntu-cve-tracker |
| amzn | vulnerabilities | https://alas.aws.amazon.com/AL2/alas.rss, https://alas.aws.amazon.com/AL2022/alas.rss, https://alas.aws.amazon.com/AL2023/alas.rss |
| gem | packages | https://s3-us-west-2.amazonaws.com/rubygems-dumps |
| npm | packages | https://replicate.npmjs.com |
| github | github | https://api.github.com/graphql |
| nvd | nvd | https://services.nvd.nist.gov/rest/json/cves/2.0, https://services.nvd.nist.gov/rest/json/cvehistory/2.0 |
| msrc | microsoft | https://api.msrc.microsoft.com/ |
| anchore_match_exclusions | anchore:exclusions | https://data.anchore-enterprise.com/providers/anchore/exclusions, https://anchore-feed-service.s3.amazonaws.com/ |
| wolfi | vulnerabilities | https://packages.wolfi.dev/os/security.json |
| chainguard | vulnerabilities | https://packages.cgr.dev/chainguard/security.json |
| mariner | vulnerabilities | https://raw.githubusercontent.com/microsoft/CBL-MarinerVulnerabilityData/ |

The anchore_match_exclusions feed requires a specific license. Please contact Anchore Support for details.

Database

Normalized vulnerability and package data is persisted in the database. In addition, the execution state and updates to the data set are tracked in the database.

Configuration

See Feeds to read about installation requirements for an air-gapped deployment and optional configuration of drivers.