Anchore Enterprise is a software bill of materials (SBOM) - powered software supply chain management solution designed for a cloud-native world. It provides continuous visibility into supply chain security risks. Anchore Enterprise takes a developer-friendly approach that minimizes friction by embedding automation into development toolchains to generate SBOMs and accurately identify vulnerabilities, malware, misconfigurations, and secrets for faster remediation.
Start by going to the Overview of Anchore Enterprise to learn more about the basic concepts and functions.
For information about deploying and operating an Anchore Enterprise instance:
Note: Many topics have nested sub-topics in the navigation pane to the left that become visible when you click a topic.
1 - Overview of Anchore Enterprise
What is Anchore Enterprise?
Anchore Enterprise is a software bill of materials (SBOM) - powered software supply chain management solution designed for a cloud-native world. It provides continuous visibility into supply chain security risks. Anchore Enterprise takes a developer-friendly approach that minimizes friction by embedding automation into development toolchains to generate SBOMs and accurately identify vulnerabilities, malware, misconfigurations, and secrets for faster remediation.
Gaining Visibility with SBOMs
Anchore Enterprise generates detailed SBOMs at each step in the development process, providing a complete inventory of the software components, including the direct and transitive dependencies you use. Anchore Enterprise stores all SBOMs in an SBOM repository to enable ongoing monitoring of your software for new or zero-day vulnerabilities that can arise even post-deployment.
Anchore Enterprise also detects SBOM drift in the build process, issuing an alert for changes in SBOMs so they can be assessed for risk, malware, compromised software, and malicious activity.
Identifying Vulnerability and Security Issues
Starting with the SBOM, Anchore Enterprise uses multiple vulnerability feeds along with a precision vulnerability matching algorithm to pinpoint relevant vulnerabilities and minimize false positives. Anchore Enterprise also identifies malware, cryptominers, secrets, misconfigurations, and other security issues.
Automating through Policies
Anchore Enterprise includes a powerful policy engine that enables you to define guardrails and automate compliance with industry standards or internal rules. Using Anchore’s customizable policies, you can automatically identify the security issues that you care about and alert developers or create policy gates for critical issues.
A software bill of materials (SBOM) is the foundational element that powers Anchore Enterprise’s secure management of the software supply chain. Anchore Enterprise automatically generates and analyzes comprehensive SBOMs at each step of the development lifecycle. SBOMs are stored in a repository to provide visibility into software components and dependencies as well as continuous monitoring for new vulnerabilities and risks throughout the development process and post-deployment.
See SBOM Generation and Management for more information.
About Anchore Enterprise SBOMs
An SBOM is a list of software components and relevant metadata that includes packages, code snippets, licenses, configurations, and other elements of an application.
Anchore Enterprise generates high-fidelity SBOMs by scanning container images and source code repositories. Anchore’s native SBOM format includes a rich set of metadata that is a superset of data included in SBOM standards such as SPDX and CycloneDX. Using this additional level of metadata, Anchore can identify secrets, file permissions, misconfiguration, malware, insecure practices, and more.
Anchore Enterprise SBOMs identify:
Open source dependencies including ecosystem type (OS, language, and other metadata)
Nested dependencies in archive files (WAR files, JAR files and more)
Package details such as name, version, creator, and license information
Filesystem metadata such as the file name, size, permissions, creation time, modification time, and hashes
Malware
Secrets, keys, and credentials
Anchore Enterprise supported ecosystems
Anchore Enterprise supports the following packaging ecosystems when identifying SBOM content. The Operating System category captures Linux packaging ecosystems. The Binary detector will inspect content to identify binaries that were installed outside of packaging ecosystems.
Operating System
RPM
DEB
APK
Linux kernel archives (vmlinuz)
Linux kernel modules (ko)
Languages
C (conan)
C++ (conan)
Dart (pubs)
Dotnet (deps.json)
Objective-C (cocoapods)
Elixir (mix)
Erlang (rebar3)
Go (go.mod, Go binaries)
Haskell (cabal, stack)
Java (jar, ear, war, par, sar, nar, native-image)
JavaScript (npm, yarn)
Jenkins Plugins (jpi, hpi)
Nix (outputs in /nix/store)
PHP (composer)
Python (wheel, egg, poetry, requirements.txt)
Ruby (gem)
Rust (cargo.lock)
Swift (cocoapods, swift-package-manager)
Binaries
Apache httpd
BusyBox
Consul
Golang
HAProxy
Helm
Java
Memcached
Nodejs
PHP
Perl
PostgreSQL
Python
Redis
Rust
Traefik
How Anchore Enterprise Uses SBOMs
Identify Vulnerabilities and Risk for Remediation
Anchore Enterprise generates detailed SBOMs at each stage of the software development lifecycle and stores them in a centralized repository to provide visibility into components and open source dependencies. These SBOMs are analyzed for vulnerabilities, malware, secrets (embedded passwords and credentials), misconfigurations, and other risks. Because SBOMs are stored in a repository, users can then continually monitor SBOMs for new vulnerabilities that arise, even post-deployment.
Detect SBOM Drift
Anchore Enterprise detects SBOM drift in the build process, identifying changes in SBOMs so they can be assessed for new risks or malicious activity. Users can set policy rules that alert them when components are added, changed, or removed so that they can quickly identify new vulnerabilities, developer errors, or malicious efforts to infiltrate builds. See SBOM Drift for more information.
Meet Compliance Requirements
Using the Anchore Enterprise UI or API, users can review SBOMs, generate reports, and export SBOMs as a JSON file. Anchore Enterprise can also export aggregated SBOMs for entire applications that can then be shared externally to meet customer and federal compliance requirements.
Customized Policy Rules
Anchore Enterprise’s high-fidelity SBOMs provide users with a rich set of metadata that can be used in customized policies.
Reduce False Positives
The extensive information provided in SBOMs generated by Anchore Enterprise allows for more accurate vulnerability matching for higher accuracy and reduced false positives.
Vulnerability and Security Scanning
Vulnerability and security scanning is an essential part of any vulnerability management strategy. Anchore Enterprise enables you to scan for vulnerabilities and security risks at any stage of your software development process, including source code repositories, CI/CD pipelines, container registries, and container runtime environments. By scanning at each stage in the process, you will find vulnerabilities and other security risks earlier and avoid delaying software delivery.
Continuous Scanning and Analysis
Anchore Enterprise provides continuous and automated scanning of an application’s code base, including related artifacts such as containers and code repositories. Anchore Enterprise starts the scanning process by generating and storing a high-fidelity SBOM that identifies all of the open source and proprietary components and their direct and transitive dependencies. Anchore uses this detailed SBOM to accurately identify vulnerabilities and security risks.
Identifying Zero-Day Vulnerabilities
When a zero-day vulnerability arises, Anchore Enterprise can instantly identify which components and applications are impacted by simply re-analyzing your stored SBOMs. You don’t need to re-scan applications or components.
Multiple Vulnerability Feeds
Anchore Enterprise uses a broad set of vulnerability data sources, including the National Vulnerability Database, GitHub Security Advisories, feeds for popular Linux distros and packages, and an Anchore-curated dataset for suppression of known false-positive vulnerability matches. See Data Feeds Overview for more information.
Precision Vulnerability Matching
Anchore Enterprise applies a best-in-class precision matching algorithm to select vulnerability data from the most accurate feed source. For example, when Anchore’s detailed SBOM data identifies that there is a specific Linux distro, such as RHEL, Anchore Enterprise will automatically use that vendor’s feed source instead of reporting every Linux vulnerability. Anchore’s precision vulnerability matching algorithm reduces false positives and false negatives, saving developer time. See the Managing False Positives section within this topic for additional ways that Anchore Enterprise reduces false positives.
Vulnerability Management and Remediation
Focusing solely on identifying vulnerability and security issues without remediation is not good enough for today’s modern DevSecOps teams. Anchore Enterprise combines the power of a rich set of SBOM metadata, reporting, and policy management capabilities to enable customers to remediate issues with the flexibility and granularity needed to avoid disrupting or slowing down software production.
Managing False Positives
Anchore Enterprise provides a number of innovative capabilities to help reduce the number of false positives and optimize the signal-to-noise ratio. It starts with accurate component identification through Anchore’s high-fidelity SBOMs and a precision vulnerability matching algorithm for fewer false positives. In addition, allowlists and temporary allowlists provide for exceptions, reducing ongoing alerts. Lastly, Anchore Enterprise enables users to correct false positives so they are not raised in subsequent scans. Corrections help increase accuracy and improve the signal-to-noise ratio over time.
Flexible Policy Enforcement
Anchore Enterprise enables users to define automated rules that indicate which vulnerabilities violate their organizations’ policies. For example, an organization may raise policy violations for vulnerabilities scored as Critical or High that have a fix available. These policy violations can generate alerts and notifications or be used to stop builds in the CI/CD pipeline or prevent code from moving to production. Policy enforcement can be applied at any stage in the development process, from the selection and usage of open source components through the build, staging, and deployment process. See Policy for more information.
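For illustration, a rule expressing that criterion might look like the following sketch. It uses the rule format described later in the Policies section; the fix_available parameter name is an assumption and should be checked against the Anchore Policy Checks reference.

```json
{
  "action": "STOP",
  "gate": "vulnerabilities",
  "trigger": "package",
  "params": [
    {"name": "package_type", "value": "all"},
    {"name": "severity_comparison", "value": ">="},
    {"name": "severity", "value": "high"},
    {"name": "fix_available", "value": "true"}
  ]
}
```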
Streamlined Remediation
Anchore Enterprise provides capabilities to automatically alert developers of issues through their existing tools, such as Jira or Slack. It also lets users define actionable remediation workflows with automated remediation recommendations.
Open Source Security, Dependencies, and Licenses
Anchore Enterprise gives users the ability to identify and track open source dependencies that are incorporated at any stage in the software lifecycle. Anchore Enterprise scans source code repositories, CI/CD pipelines, and container registries to generate SBOMs that include both direct and transitive dependencies and to identify exactly where those dependencies are found.
Anchore Enterprise also identifies the relevant open source licenses and enables users to ensure that the open source components used along with their dependencies are compliant with all license requirements. License policies are customizable and can be tailored to fit each organization’s open source requirements.
Compliance with Standards
Anchore Enterprise provides a flexible policy engine that enables you to identify and alert on the most important vulnerabilities and security issues, and to meet internal or external compliance requirements. You can leverage out-of-the-box policy packs for common compliance standards, or create custom policies for your organization’s needs. You can define rules against the most complete set of metadata and apply policies at the most granular level with different rules for different applications, teams, and pipelines.
Anchore offers out-of-the-box policy packs to help you comply with NIST and CIS standards that are foundational for such industry-specific standards as HIPAA and PCI DSS.
Flexible Policy Enforcement
Policies are flexible and provide both notifications and gates to prevent code from moving along the development pipeline or into production based on your criteria. You can define policy rules for image and file metadata, file contents, licenses, and vulnerability scoring. And you can define unique rules for each team, for each application, and for each pipeline.
Automated Rules
Anchore Enterprise enables users to define automated rules that indicate which vulnerabilities
violate their organization’s policies. For example, an organization may raise policy violations for vulnerabilities scored as Critical or High that have a fix available. These policy violations can generate alerts and notifications or be used to stop builds in the CI/CD pipeline or prevent code from moving to production. You can apply policy enforcement at any stage in the development process from the selection and usage of open source components through the build, staging, and deployment process.
Anchore Enterprise Policy Packs
Anchore Enterprise provides the following out-of-the-box policy bundles that automate checks for common compliance programs, standards, and laws including CIS, NIST, FedRAMP, CISA vulnerabilities, and more. Policy Packs comprise bundled policies and are flexible so that you can modify them to meet your organization’s requirements.
FedRAMP
The FedRAMP Policy validates whether container images scanned by Anchore Enterprise are compliant with the FedRAMP Vulnerability Scanning Requirements and also validates them against FedRAMP controls specified in NIST 800-53 Rev 5 and NIST 800-190.
DISA Image Creation and Deployment Guide
The DISA Image Creation and Deployment Guide Policy provides security and compliance checks that align with specific NIST 800-53 and NIST 800-190 security controls and requirements as described in the Department of Defense (DoD) Container Image Creation and Deployment Guide.
DoD Iron Bank
The DoD Iron Bank Policy validates images against DoD security and compliance requirements in alignment with U.S. Air Force security standards at Platform One and Iron Bank.
CIS
The CIS Policy validates a subset of security and compliance checks against container image best practices and NIST 800-53 and NIST 800-190 security controls and requirements. To expand CIS security controls, you can customize the policies in accordance with CIS Benchmarks.
NIST
The NIST policy validates content against NIST 800-53 and NIST 800-190.
Anchore Enterprise is a distributed application that runs on supported container runtime platforms. The product is deployed as a series of containers that provide services whose functions are made available through APIs. These APIs can be consumed directly via included clients such as AnchoreCTL and the GUI, or via out-of-the-box integrations for use with container registries, Kubernetes, and CI/CD tooling. Alternatively, users can interact directly with the APIs for custom integrations.
Services
The following sections describe the services within Anchore Enterprise.
APIs
Enterprise Public API
The Enterprise API is the primary RESTful API for Anchore Enterprise and is the one used by AnchoreCTL, the GUI, and integrations. This API is used to upload data such as a software bill of materials (SBOM) and container images, execute functions, and retrieve information (SBOM data, vulnerabilities, and the results of policy evaluations). The API also exposes the user and account management functions. See Using the Anchore API for more information.
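For example, assuming a deployment reachable at {servername:port}, an authenticated request might look like the following. The image-listing route shown here is illustrative; refer to the API specification for the full set of routes.

```shell
# List the images known to the system for the authenticated account
curl -u {username:password} "http://{servername:port}/v2/images"
```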
Stateful Services
Data Syncer
The Anchore Enterprise Data Syncer downloads and normalizes data from external sources and makes it available to Anchore Enterprise. It communicates with the hosted Anchore Data Service to fetch the following datasets:
vulnerability_db: Contains vulnerability data from the NVD, Red Hat, and other sources.
vulnerability_annotations: Contains vulnerability annotations from CISA KEV (Known Exploited Vulnerabilities) that are used to provide additional context to vulnerabilities.
malware_signatures: Contains malware signatures from ClamAV that are used to detect malware in images.
epss_db: Contains exploit prediction scores and percentiles for vulnerabilities.
Policy Engine
The policy engine is responsible for loading an SBOM and associated content and then evaluating it against a set of policy rules. This resulting policy evaluation is then passed to the Catalog service. The policies are stored as a series of JSON documents that can be uploaded and downloaded via the Enterprise API or edited via the GUI.
Catalog
The catalog is the primary state manager of the system and provides data access to system services from the backend database service (PostgreSQL).
SimpleQueue
The SimpleQueue is another PostgreSQL-backed queue service that the other components use for task execution, notifications, and other asynchronous operations.
Workers
Analyzers
An Analyzer is the component that generates an SBOM from an artifact or source repo (which may be passed through the API or pulled from a registry), performs the vulnerability and policy analysis, and stores the SBOM and the results of the analysis in the organization’s Anchore Enterprise repository. Alternatively the AnchoreCTL client can be used to locally scan and generate the SBOM and then pass the SBOM to the analyzers via the API. Each Analyzer can process one container image or source repo at a time. You can increase the number of Analyzers (within the limits of your contract) to increase the throughput for the system in order to process multiple artifacts concurrently.
Clients
AnchoreCTL
AnchoreCTL is a Go-based command line client for Anchore Enterprise. It can be used to send commands to the backend API as part of manual or automated operations. It can also be used to generate SBOM content that is local to the machine it is run on.
AnchoreCTL is the recommended client for developing custom integrations with CI/CD systems or other situations where local processing of content needs to be performed before being passed to the Enterprise API.
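For example, a typical interaction might look like the following sketch. The exact commands and flags vary by release; run anchorectl --help to confirm what is available in your version.

```shell
# Submit an image for centralized analysis
anchorectl image add docker.io/library/nginx:latest

# Retrieve the vulnerability findings for that image
anchorectl image vulnerabilities docker.io/library/nginx:latest
```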
Anchore Enterprise GUI
The Anchore Enterprise GUI is a front end to the API services and simplifies many of the processes associated with creating policies, viewing SBOMs, creating and running reports, and configuring the overall system (notifications, users, registry credentials, and more).
Integrations
External systems can integrate with Anchore Enterprise using software entities that exercise select parts of the Anchore Enterprise API.
Such software entities can be executable agents or plugins. We use the generic term integration instance to refer to such a deployed software entity.
Enterprise can receive health reports from integration instances to track and monitor their status (assuming the integration instance implements that).
Kubernetes Admission Controller
The Kubernetes Admission Controller is a plugin that can be used to intercept a container image as it is about to be deployed to Kubernetes. The image is passed to Anchore Enterprise which analyzes it to determine if it meets the organization’s policy rules. The policy evaluation result can then allow, warn, or block the deployment.
Kubernetes and ECS Runtime Inventory
anchore-k8s-inventory and anchore-ecs-inventory are agents that create an ongoing inventory of the images that are running in a Kubernetes or ECS cluster. The agents run inside the runtime environment (under a service account) and connect to the local runtime API. The agents poll the API on an interval to retrieve a list of container images that are currently in use.
Multi-Tenancy
Accounts
Accounts in Anchore Enterprise are a boundary that separates data, policies, notifications, and users into a distinct domain. An account can be mapped to a team, project, or application that needs its own set of policies applied to a specific set of content. Users may be granted access to multiple accounts.
Users
Users are local to an account and can have roles as defined by RBAC. Usernames must be unique across the entire deployment to enable API-based authentication requests. Certain users can be configured such that they have the ability to switch context between accounts, akin to a super user account.
Anchore takes a data-driven approach to analysis and policy enforcement. The system has the following discrete phases for each image analyzed:
Fetch the image content and extract it, but never execute it.
Analyze the image by running a set of Anchore analyzers over the image content to extract and classify as much metadata as possible.
Save the resulting analysis in the database for future use and audit.
Evaluate policies against the analysis result, including vulnerability matches on the artifacts discovered in the image.
Update to the latest external data used for policy evaluation and vulnerability matches (feed sync), and automatically update image analysis results against any new data found upstream.
Notify users of changes to policy evaluations and vulnerability matches.
Repeat steps 5 and 6 on intervals to ensure you have the latest external data and updated image evaluations.
The primary interface is a RESTful API that provides mechanisms to request analysis, policy evaluation, and monitoring of images in registries as well as query for image contents and analysis results. Anchore Enterprise also provides a command-line interface (CLI), and its own container.
The following modes provide different ways to use Anchore within the API:
Interactive Mode - Use the APIs to explicitly request an image analysis, or get a policy evaluation and content reports. The system only performs operations when specifically requested by a user.
Watch Mode - Use the APIs to configure Anchore Enterprise to poll specific registries and repositories/tags to watch for new images, and then automatically pull and evaluate them. The API sends notifications when a state changes for a tag’s vulnerability or policy evaluation.
Anchore can be easily integrated into most environments and processes using these two modes of operation.
Once an image is submitted to Anchore Enterprise for analysis, Anchore Enterprise will attempt to retrieve metadata about the image from the Docker registry and, if successful, will download the image and queue the image for analysis.
Anchore Enterprise can run one or more analyzer services to scale out processing of images. The next available analyzer worker will process the image.
During analysis, every package, software library, and file is inspected, and this data is stored in the Anchore database.
Anchore Enterprise includes a number of analyzer modules that extract data from the image including:
Image metadata
Image layers
Operating System Package Data (RPM, DEB, APKG)
File Data
Ruby Gems
Node.js NPMs
Java Archives
Python Packages
.NET NuGet Packages
File content
Once a tag has been added to Anchore Enterprise, the repository will be monitored for updates to that tag. See Image and Tag Watchers for more information about images and tags.
Any updated images will be downloaded and analyzed.
A Docker or OCI image is composed of layers. Some of the layers are created during a build process, such as by following instructions in a Dockerfile. But many of the layers will come from previously built images. These images likely come from a container team at your organization, or may be built directly on images from a Linux distribution vendor. In some cases this chain could be many images deep as various teams add standard software or configuration.
Docker uses the FROM clause to denote an image to use as a basis for building a new image. The image provided in this clause is known by Docker as the Parent Image, but is commonly referred to as the Base Image. This chain of images built from other images using the FROM clause is known as an Image’s ancestry.
Note Docker defines Base Image as an image with a FROM SCRATCH clause. Anchore does NOT follow this definition, instead following the more common usage where Base Image refers to the image that a given image was built from.
Example Ancestry
The following is an example of an image with multiple ancestors:
A base distro image, for example debian:10
FROM scratch
...
A framework container image built from that Debian image, for example a Node.js image; let’s call it mynode:latest
FROM debian:10
# Install nodejs
The application image itself built from the framework container, let’s call it myapp:v1
FROM mynode:latest
COPY ./app /
...
These Dockerfiles generate the following ancestry graph:
Here debian:10 is the parent of mynode:latest, which is the parent of myapp:v1.
Anchore compares the layer digests of images that it knows about to determine an image’s ancestry. This ensures that the exact image used to build a new image is identified.
Given our above example, we may see the following layers for each image. Note that each subsequent image is a superset of the previous image’s layers.
Anchore automatically calculates an image’s ancestry as images are scanned. This works by comparing the layer digests of each image to calculate the entire chain of images that produced a given image. The entire ancestry can be retrieved for an image through the GET /v2/images/{image_digest}/ancestors API. See the API docs for more information on the specifics.
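For example, using curl against the route named above (placeholders shown for credentials, endpoint, and digest):

```shell
curl -u {username:password} "http://{servername:port}/v2/images/{image_digest}/ancestors"
```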
Base Image
It is often useful to compare an image with another image in its ancestry. For example to filter out vulnerabilities that are present in a “golden image” from a platform team and only showing vulnerabilities introduced by the application being built on the “golden image”.
Controlling the Base Image
Users can control which ancestor is chosen as the base image by marking the desired image(s) with a special annotation anchore.user/marked_base_image. The annotation should be set to a value of true, otherwise it will be ignored. This annotation is currently restricted to users in the “admin” account.
If an image with this annotation should no longer be considered a Base Image, then you must update the annotation to false, as it is not currently possible to remove annotations.
Usage of this annotation when calculating the Base Image can be disabled by setting services.policy_engine.enable_user_base_image to false in the configuration file (see deployment specific docs for configuring this setting).
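For example, the setting named above maps to the following snippet in config.yaml:

```yaml
services:
  policy_engine:
    enable_user_base_image: false
```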
Anchorectl Example
You can add an image with this annotation using AnchoreCTL with the following:
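A sketch of such a command, assuming the --annotation flag on anchorectl image add (confirm the flag against your AnchoreCTL version):

```shell
anchorectl image add registry.example.com/myorg/golden-base:latest \
  --annotation anchore.user/marked_base_image=true
```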
Anchore will automatically calculate the Base Image from an image’s ancestry using the closest ancestor. From our example above, the Base Image for myapp:v1 is mynode:latest.
The first ancestor with this annotation will be used as the Base Image; if no ancestors have this annotation, then it will fall back to using the closest ancestor (the Parent Image).
The rules for determining the Base Image are encoded in the following diagram:
graph
start([start])-->image
image[image]
image-->first_parent_exists
first_parent_exists{Does this image have a parent?}
first_parent_exists-->|No|no_base_image
first_parent_exists-->|yes|first_parent_image
first_parent_image[Parent Image]
first_parent_image-->config
config{User Base Annotations Enabled in configuration?}
config-->|No|base_image
config-->|yes|check_parent
check_parent{Parent has anchore.user/marked_base_image: true annotation}
check_parent-->|No|parent_exists
parent_exists{Does the parent image have a parent?}
parent_exists-->|Yes|parent_image
parent_image[/Move to next Parent Image/]
parent_image-->check_parent
parent_exists-->|No|no_base_image
check_parent-->|Yes|base_image
base_image([Found Base Image])
no_base_image([No Base Image Exists])
Using the Base Image
The Policy evaluation and Vuln Scan APIs have an optional base_digest parameter that is used to provide comparison data between two images. These APIs can be used in conjunction with the ancestry API to perform comparisons to the Base Image so that application developers can focus on results in their direct control. As of Enterprise v5.7.0, a special value auto can also be specified for this parameter to have the system automatically determine which image to use in the comparison based on the above rules.
To read more about the base comparison features, jump to
In addition to these user-facing APIs, a few parts of the system utilize the Ancestry information.
The Ancestry Policy Gate uses the Base Image rules to determine which image to evaluate against
Reporting uses the Base Image to calculate the “Inherited From Base” column for vulnerabilities
The UI displays the Base Image and uses it for Policy Evaluations and Vulnerability Scans
Additional notes about ancestor calculations
An image B is only a child of Image A if all of the layers of Image A are present in Image B.
For example, mypython and mynode represent two different language runtime images built from a debian base. These two images are not ancestors of each other because the layers in mypython:latest are not a superset of the layers in mynode:latest, nor the other way around.
But these images could be based on a 3rd image which is made up of the 2 layers that they share. If Anchore knows about this 3rd image it would show up as an ancestor for both mypython and mynode.
The Anchore UI would identify the debian:10 image as an ancestor of the mypython:latest and the mynode:latest images. But we do not currently expose child ancestors, so we would not show the children of the debian:10 image.
This feature provides a mechanism to compare the policy checks for an image with those of a Base Image. You can read more about Base Images and how to
find them here. Base comparison uses the same policy and tag to evaluate both images to
ensure a fair comparison. The API yields a response similar to the policy checks API with an additional element within each triggered gate check to
indicate whether the result is inherited from the Base Image.
Usage
This functionality is currently available via the Enterprise UI and API.
API
Refer to the API Access section for the API specification. The policy check API (GET /v2/images/{imageDigest}/check) has an optional base_digest query parameter that can be used to specify an image to compare policy findings to. When this query parameter is provided, each finding’s inherited_from_base field is filled in with true or false to denote whether the finding is present in the provided image. If no image is provided, then the inherited_from_base field will be null to indicate that no comparison was performed.
Example request using curl to retrieve the policy check for an image with digest sha256:xyz and tag p/q:r, and compare the results to a Base Image with digest sha256:abc:
curl -X GET -u {username:password} "http://{servername:port}/v2/images/sha256:xyz/check?tag=p/q:r&base_digest=sha256:abc"
Example output:
{"image_digest":"sha256:xyz","evaluated_tag":"p/q:r","evaluations":[{"comparison_image_digest":"sha256:abc","details":{"findings":[{"trigger_id":"41cb7cdf04850e33a11f80c42bf660b3","gate":"dockerfile","trigger":"instruction","message":"Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check","action":"warn","policy_id":"48e6f7d6-1765-11e8-b5f9-8b6f228548b6","recommendation":"","rule_id":"312d9e41-1c05-4e2f-ad89-b7d34b0855bb","allowlisted":false,"allowlist_match":null,"inherited_from_base":true},{"trigger_id":"CVE-2019-5435+curl","gate":"vulnerabilities","trigger":"package","message":"MEDIUM Vulnerability found in os package type (APKG) - curl (CVE-2019-5435 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5435)","action":"warn","policy_id":"48e6f7d6-1765-11e8-b5f9-8b6f228548b6","recommendation":"","rule_id":"6b5c14e7-a6f7-48cc-99d2-959273a2c6fa","allowlisted":false,"allowlist_match":null,"inherited_from_base":false}]...}...}...]}
Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check is triggered by both images and hence inherited_from_base
is marked true
MEDIUM Vulnerability found in os package type (APKG) - curl (CVE-2019-5435 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5435) is not
triggered by the Base Image and therefore the value of inherited_from_base is false
1.3.1.1.2 - Compare Base Image Security Vulnerabilities
This feature provides a mechanism to compare the security vulnerabilities detected in an image with those of a Base Image. You can read more about base
images and how to find them here. The API yields a response similar to vulnerabilities API with an
additional element within each result to indicate whether the result is inherited from the Base Image.
Usage
This functionality is currently available via the Enterprise UI and API. Watch this space as we add base comparison support in other tools.
API
Refer to the API Access section for the API specification. The vulnerabilities API (GET /v2/images/{image_digest}/vuln/{vtype}) has a base_digest query parameter that can be used to specify an image to compare vulnerability findings to. When this query parameter is provided, an additional inherited_from_base field is included for each vulnerability.
Example request using curl to retrieve security vulnerabilities for an image with digest sha256:xyz and compare the results to a Base Image with digest sha256:abc:
curl -X GET -u {username:password} "http://{servername:port}/v2/images/sha256:xyz/vuln/all?base_digest=sha256:abc"
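An abbreviated sketch of the kind of response returned is shown below; field names other than inherited_from_base are illustrative, and other fields are trimmed for brevity.

```json
{
  "image_digest": "sha256:xyz",
  "base_digest": "sha256:abc",
  "vulnerabilities": [
    {"vuln": "CVE-2018-16842", "package": "libcurl-7.61.1-r3", "inherited_from_base": true},
    {"vuln": "CVE-2019-5482", "package": "apache2-2.4.43-r0", "inherited_from_base": false}
  ]
}
```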
Note that inherited_from_base is a new element in the API response added to support base comparison. The assigned boolean value indicates whether the
exact vulnerability is present in the Base Image. In the above example:
CVE-2018-16842 affects the libcurl-7.61.1-r3 package in both images, hence inherited_from_base is marked true
CVE-2019-5482, which affects the apache2-2.4.43-r0 package, does not affect the Base Image, and therefore inherited_from_base is set to false
1.3.1.2 - Image Analysis Process
There are two types of image analysis:
Centralized Analysis
Distributed Analysis
Image analysis is performed as a distinct, asynchronous, and scheduled
task driven by queues that analyzer workers periodically poll.
Image analysis_status states:
stateDiagram
[*] --> not_analyzed: analysis queued
not_analyzed --> analyzing: analyzer starts processing
analyzing --> analyzed: analysis completed successfully
analyzing --> analysis_failed: analysis fails
analyzing --> not_analyzed: re-queue by timeout or analyzer shutdown
analysis_failed --> not_analyzed: re-queued by user request
analyzed --> not_analyzed: re-queued for re-processing by user request
Centralized Analysis
The analysis process is composed of several steps and utilizes several
system components. The basic flow of that task is shown in the following example:
Centralized analysis high level summary:
sequenceDiagram
participant A as AnchoreCTL
participant R as Registry
participant E as Anchore Deployment
A->>E: Request Image Analysis
E->>R: Get Image content
R-->>E: Image Content
E->>E: Analyze Image Content (Generate SBOM and secret scans etc) and store results
E->>E: Scan sbom for vulns and evaluate compliance
The analyzers operate in a task loop for analysis tasks as shown below:
Adding more detail, the API call trace between services looks similar to the following example flow:
Distributed Analysis
In distributed analysis, the analysis of image content takes place outside the Anchore deployment and the result is imported
into the deployment. The image has the same state machine transitions, but for an imported analysis the ‘analyzing’ state covers processing of the import data (vulnerability scanning, policy checks, etc.) to prepare it for internal use; it does not download or touch any image content.
High level example with AnchoreCTL:
sequenceDiagram
participant A as AnchoreCTL
participant R as Registry/Docker Daemon
participant E as Anchore Deployment
A->>R: Get Image content
R-->>A: Image Content
A->>A: Analyze Image Content (Generate SBOM and secret scans etc)
A->>E: Import SBOM, secret search, fs metadata
E->>E: Scan sbom for vulns and evaluate compliance
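A sketch of this flow using AnchoreCTL, assuming the --from flag for local analysis (confirm the flag against your version):

```shell
# Analyze the image locally from the Docker daemon and import the resulting
# SBOM and related content into the Anchore Enterprise deployment
anchorectl image add myapp:v1 --from docker
```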
Anchore Enterprise provides malware scanning using ClamAV, an open-source antivirus solution designed to detect malicious code embedded in container images.
When enabled, malware scanning occurs during Centralized Analysis, when the image content itself is available.
Any findings are available via:
Please Note: Files in an image which are greater than 2GB will be skipped due to a limitation in ClamAV. Any skipped file will be identified with a Malware Signature as ANCHORE.FILE_SKIPPED.MAX_FILE_SIZE_EXCEEDED.
Signature DB Updates
Each analyzer service will run a malware signature update before analyzing each image. This does add some latency to the overall analysis time but ensures the signatures
are as up-to-date as possible for each image analyzed. The update behavior can be disabled if you prefer to manage the freshness of the db via another route, such as a shared filesystem
mounted to all analyzer nodes that is updated on a schedule. See the configuration section for details on disabling the db update.
The status of the db update is present in each scan output for each image.
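As an illustration only, the analyzer’s ClamAV settings are typically expressed along these lines; the exact keys and file location are an assumption here, so consult the configuration section for the authoritative setting.

```yaml
# Analyzer malware scanning configuration (key names and location may differ by version)
malware:
  clamav:
    enabled: true
    db_update_enabled: false
```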
Scan Results
The malware content type is a list of scan results. Each result is the run of a malware scanner, by default clamav.
The list of files found to contain malware signature matches is in the findings property of each scan result. An empty array value indicates no matches found.
The metadata property provides generic metadata specific to the scanner. For the ClamAV implementation, this includes the version data about the signature db used and
if the db update was enabled during the scan. If the db update is disabled, then the db_version property of the metadata will not have values since the only way to get
the version metadata is during a db update.
Anchore has the capability to monitor external Docker Registries for updates to tags as well as new tags. It also watches for updates to vulnerability databases and package metadata (the “Feeds”).
Repository Updates: New Tags
The process for monitoring updates to repositories, the addition of new tag names, is done on a duty cycle and performed by the Catalog component(s). The scheduling and tasks are driven by queues provided by the SimpleQueue service.
Periodically, controlled by the cycle_timers configuration in the config.yaml of the catalog, a process is triggered to list all the Repository Subscription records in the system and for each record, add a task to a specific queue.
Periodically, also controlled by the cycle_timers config, a process is triggered to pick up tasks off that queue and process repository scan tasks. Each task looks approximately like the following:
The output of this process is new tag_update subscription records, which are subsequently processed by the Tag Update handlers as described below. You can view the tag_update subscriptions using AnchoreCTL:
anchorectl subscription list -t tag_update
Tag Updates: New Images
To detect updates to tags, mapping of a new image digest to a tag name, Anchore periodically checks the registry and downloads the tag’s image manifest to compare the computed digests. This is done on a duty cycle for every tag_update subscription record. Therefore, the more subscribed tags exist in the system, the higher the load on the system to check for updates and detect changes. This processing, like repository update monitoring, is performed by the Catalog component(s).
The process, the duty-cycle of which is configured in the cycle_timers section of the catalog config.yaml, is described below:
As new updates are discovered, they are automatically submitted to the analyzers, via the image analysis internal queue, for processing.
The overall process and interaction of these duty cycles works like:
Anchore Enterprise is a data intensive system. Storage consumption grows with the number of images analyzed, which leaves the
following options for storage management:
Over-provisioning storage significantly
Increasing capacity over time, resulting in downtime (e.g. stop system, grow the db volume, restart)
Manually deleting image analysis to free space as needed
In most cases, option 1 only works for a while, which then requires using 2 or 3. Managing storage provisioned for a
postgres DB is somewhat complex and may require significant data copies to new volumes to grow capacity over time.
To help mitigate the storage growth of the db itself, Anchore Enterprise already provides an object storage subsystem that
enables using external object stores like S3 to offload the unstructured data storage needs to systems that are
more growth tolerant and flexible. This lowers the db overhead but does not fundamentally address the issue of unbounded
growth in a busy system.
The Analysis Archive extends the object store even further by providing a system-managed way to move an image analysis and all of its related data (policy evaluations, tags, annotations, etc.) to a location outside of the main set of images, such that it consumes much less storage in the database when using an object store, preserves the last state of the image, and supports moving it back into the main image set in the future without requiring that the image itself be reanalyzed. Restoring from the archive does not require the actual Docker image to exist at all.
To facilitate this, the system can be thought of as two sets of analysis with different capabilities and properties:
Working Set Images
The working set is the set of images in the ‘analyzed’ state in the system. These images are stored in the database,
optionally with some data in an external object store. Specifically:
State = ‘analyzed’
The set of images available from the /images api routes
Available for policy evaluation, content queries, and vulnerability updates
Archive Set Images
The archive set of images are image analyses that reside almost entirely in the object store, which can be configured to
be a different location than the object store used for the working set, with minimal metadata in the anchore DB necessary
to track and restore the analysis back into the working set in the future. An archived image analysis preserves all the
annotations, tags, and metadata of the original analysis as well as all existing policy evaluation histories, but
are not updated with new vulnerabilities during feed syncs and are not available for new policy evaluations or content
queries without first being restored into the working set.
Not listed in /images API routes
Cannot have policy evaluations executed
No vulnerability updates automatically (must be restored to working set first)
Available from the /archives/images API routes
Point-in-time snapshot of the analysis, policy evaluation, and vulnerability state of an image
Independently configurable storage location (the analysis_archive property under services.catalog in config.yaml; see the configuration sketch after this list)
Small db storage consumption (if using external object store, only a few small records, bytes)
Able to use different type of storage for cost effectiveness
Can be restored to the working set at any time to restore full query and policy capabilities
The archive object store is not used for any API operations other than the restore process
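A sketch of what that independently configured archive storage can look like in config.yaml; the driver name and options shown are illustrative.

```yaml
services:
  catalog:
    analysis_archive:
      compression:
        enabled: true
      storage_driver:
        name: s3
        config:
          bucket: anchore-analysis-archive
          region: us-east-1
```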
An image analysis, identified by the digest of the image, may exist in both sets at the same time; they are not mutually exclusive. However, the archive is not automatically updated and must be deleted and re-archived to capture updated state from the working set image if desired.
Benefits of the Archive
Because archived image analyses are stored in a distinct object store and tracked with their own metadata in the db, the
images in that set will not impact the performance of working set image operations such as API operations, feed syncs, or
notification handling. This helps keep the system responsive and performant in cases where the set of images that you’re
interested in is much smaller than the set of images in the system, but you don’t want to delete the analysis because it
has value for audit or historical reasons.
Leverage cheaper and more scalable cloud-based storage solutions (e.g. S3 IA class)
Keep the working set small to manage capacity and api performance
Ensure the working set is images you actively need to monitor without losing old data by sending it to the archive
Automatic Archiving
To help facilitate data management automatically, Anchore supports rules to define which data to archive and when
based on a few qualities of the image analysis itself. These rules are evaluated periodically by the system.
Anchore supports both account-scoped rules, editable by users in the account, and global system rules, editable only by
the system admin account users. All users can view system global rules such that they can understand what will affect
their images but they cannot update or delete the rules.
The process of automatic rule evaluation:
The catalog component periodically (daily by default, but configurable) will run through each rule in the system and
identify image digests that should be archived according to either account-local rules or system global rules.
Each matching image analysis is added to the archive.
Each successfully added analysis is deleted from the working set.
For each digest migrated, a system event log entry is created, indicating that the image digest was moved to the
archive.
Archive Rules
The rules that match images provide three selectors:
Analysis timestamp - the age of the analysis itself, as expressed in days
Source metadata (registry, repo, tag) - the values of the registry, repo, and tag values
Tag history depth – the number of images mapped to a tag ordered by detected_at timestamp (the time at which the
system observed the mapping of a tag to a specific image manifest digest)
Rule scope:
global - these rules will be evaluated against all images and all tags in the system, regardless of the owning account.
(system_global = true)
account - these rules are only evaluated against the images and tags of the account which owns the rule. (system_global = false)
selector: a json object defining a set of filters on registry, repository, and tag that this rule will apply to.
Each entry supports wildcards. e.g. {"registry": "*", "repository": "library/*", "tag": "latest"}
tag_versions_newer: the minimum number of tag->digest mappings with newer timestamps that must be present for this rule to
match an image tag.
analysis_age_days: the minimum age of the analysis to match, as indicated by the ‘analyzed_at’ timestamp on the image record.
transition: the operation to perform, one of the following
archive: works on the working set and transitions to archive, while deleting the source analysis upon successful
archive creation. Specifically: the analysis will “move” to the archive and no longer be in the working set.
delete: works on the archive set and deletes the archived record on a match
exclude: a json object defining a set of filters on registry, repository, and tag, that will exclude a subset of image(s)
from the selector defined above.
expiration_days: This allows the exclusion filter to expire. When set to -1, the exclusion filter does not expire
max_images_per_account: This setting may only be applied on a single “system_global” rule, and controls the maximum number of images
allowed in the Anchore deployment (that are not archived). If this number is exceeded, Anchore will transition (according to the transition field value)
the oldest images exceeding this maximum count.
last_seen_in_days: This allows images to be excluded from the archive, if there is a corresponding runtime inventory image, where last_seen_in_days is within the specified number of days.
This field will exclude any images last seen in X number of days regardless of whether it’s in the exclude selector.
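Putting these fields together, an illustrative rule (expressed as JSON using the field names described above) that archives analyses older than 90 days with at least two newer tag mappings, while excluding latest tags, might look like:

```json
{
  "selector": {"registry": "*", "repository": "*", "tag": "*"},
  "tag_versions_newer": 2,
  "analysis_age_days": 90,
  "transition": "archive",
  "system_global": false,
  "exclude": {
    "selector": {"registry": "*", "repository": "*", "tag": "latest"},
    "expiration_days": -1
  }
}
```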
Rule conflicts and application:
For an image to be transitioned by a rule it must:
Match at least 1 rule for each of its tag entries (in the working set for an archive transition, or in the archive set for a delete transition)
All rule matches must be of the same scope; global and account rules cannot interact
Put another way, if any tag record for an image analysis is not defined to be transitioned, then the analysis record is
not transitioned.
Usage
Image analysis can be archived explicitly via the API (and CLI) as well as restored. Alternatively, the API and CLI can
manage the rules that control automatic transitions. For more information see the following:
Once an image has been analyzed and its content has been discovered, categorized, and processed, the results can be evaluated against a user-defined set of checks to give a final pass/fail recommendation for an image. Anchore Enterprise policies are how users describe which checks to perform on what images and how the results should be interpreted.
A policy is made up of a set of rules that are used to perform an evaluation of a container image. The rules can define checks against an image for things such as:
security vulnerabilities
package allowlists and denylists
configuration file contents
presence of credentials in image
image manifest changes
exposed ports
These checks are defined as Gates that contain Triggers. Each Trigger performs a specific check and emits match results; together, these define the things that the system can automatically evaluate and return a decision about.
For a full listing of gates, triggers, and their parameters, see: Anchore Policy Checks
These policies can be applied globally or customized for specific images or categories of applications.
A policy evaluation can return one of two results:
PASSED indicating that the image complies with your policy
FAILED indicating that the image is out of compliance with your policy.
Policies are the unit of policy definition and evaluation in Anchore Enterprise. A user may have multiple policies, but for a policy evaluation, the user must specify a policy to be evaluated or default to the policy currently marked ‘active’. See Policies Overview for more detail on manipulating and configuring policies.
Components of a Policy
A policy is a single JSON document, composed of several parts:
Policy Gates - The named sets of rules and actions.
Allowlists - Named sets of rule exclusions to override a match in a policy rule.
Mappings - Ordered rules that determine which policies and allowlists should be applied to a specific image at evaluation time.
Allowlisted Images - Overrides for specific images to statically set the final result to a pass regardless of the policy evaluation result.
Blocklisted Images - Overrides for specific images to statically set the final result to a fail regardless of the policy evaluation result.
Example JSON for an empty policy, showing the sections and top-level elements:
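A minimal sketch of that structure is shown below; the element names follow the sections described above, but the exact field names should be confirmed against the policy schema for your release.

```json
{
  "id": "policy_id",
  "name": "Example Policy",
  "version": "2",
  "comment": "Empty policy showing the top-level sections",
  "rule_sets": [],
  "allowlists": [],
  "mappings": [],
  "allowlisted_images": [],
  "denylisted_images": []
}
```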
A policy contains zero or more rule sets. The rule sets in a policy define the checks to make against an image and the actions to recommend if the checks find a match.
Example of a single rule set JSON object, one entry in the rule_set array of the larger policy document:
{"name":"DefaultPolicy","version":"2","comment":"Policy for basic checks","id":"ba6daa06-da3b-46d3-9e22-f01f07b0489a","rules":[{"action":"STOP","gate":"vulnerabilities","id":"80569900-d6b3-4391-b2a0-bf34cf6d813d","params":[{"name":"package_type","value":"all"},{"name":"severity_comparison","value":">="},{"name":"severity","value":"medium"}],"trigger":"package"}]}
The above example defines a stop action to be produced for all package vulnerabilities found in an image that are severity medium or higher.
For information on how Rule Sets work and are evaluated, see: Rule Sets
Allowlists
An allowlist is a set of exclusion rules for trigger matches found during policy evaluation. An allowlist defines a specific gate and trigger_id (part of the output of a policy rule evaluation) that should have its action recommendation statically set to go. When a policy rule result is allowlisted, it is still present in the output of the policy evaluation, but its action is set to go and it is indicated that there was an allowlist match.
Allowlists are useful for things like:
Ignoring CVE matches that are known to be false-positives
Ignoring CVE matches on specific packages (perhaps if they are known to be custom patched)
Example of a simple allowlist as a JSON object from a policy:
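A sketch of a small allowlist; each item pairs a gate with a trigger_id as described above, and the exact field names should be checked against the policy schema for your release.

```json
{
  "id": "allowlist1",
  "name": "Example Allowlist",
  "version": "2",
  "comment": "Ignore a known false positive",
  "items": [
    {"id": "item1", "gate": "vulnerabilities", "trigger_id": "CVE-2019-5435+curl"}
  ]
}
```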
Mappings are named rules that define which rule sets and allowlists to evaluate for a given image. The list of mappings is evaluated in order, so the ordering of the list matters because the first rule that matches an input image will be used and all others ignored.
Allowlisted images are images, defined by registry, repository, and tag/digest/imageId, that will always result in a pass status for policy evaluation unless the image is also matched in the denylisted images section.
Denylisted images are images, defined by registry, repository, and tag/digest/imageId, that will always result in a policy evaluation status of fail. It is important to note that denylisting an image does not short-circuit the mapping evaluation or policy evaluations, so the full set of trigger matches will still be visible in the policy evaluation result.
Denylisted image matches override any allowlisted image matches (e.g., a tag that matches a rule in both lists will always be denylisted and will fail).
A policy evaluation results in a status of pass or fail, and that result is based on:
The mapping section to determine which policies and allowlists to select for evaluation against the given image and tag
The output of the policies’ triggers and applied allowlists.
Denylisted images section
Allowlisted images section
A pass status means the image evaluated against the policy and only go or warn actions resulted from the policy evaluation and allowlisted evaluations, or the image was allowlisted. A fail status means the image evaluated against the policy and at least one stop action resulted from the policy evaluation and allowlist evaluation, or the image was denylisted.
The flow chart for policy evaluation:
Next Steps
Read more about the Rule Sets component of a policy.
1.3.2.2 - Rule Sets
Overview
A rule set is a named set of rules, represented as a JSON object within a Policy. A rule set is made up of rules that define a specific check to perform and a resulting action.
A Rule Set is made up of:
ID: a unique id for the rule set within the policy
Name: a human readable name to give the rule set (may contain spaces, etc.)
A list of rules to define what to evaluate and the action to recommend on any matches for the rule
A simple example of a rule_set JSON object (found within a larger policy object):
{"name":"DefaultPolicy","version":"2","comment":"Policy for basic checks","id":"policy1","rules":[{"action":"STOP","gate":"vulnerabilities","id":"rule1","params":[{"name":"package_type","value":"all"},{"name":"severity_comparison","value":">="},{"name":"severity","value":"medium"}],"trigger":"package","recommendation":"Upgrade the package",}]}
The above example defines a stop action to be produced for all package vulnerabilities found in an image that are severity medium or higher.
Policy evaluation is the execution of all defined triggers in the rule set against the image analysis result and feed data and results in a set of output trigger matches, each of which contains the defined action from the rule definition. The final recommendation value for the policy evaluation is called the final action, and is computed from the set of output matches: stop, go, or warn.
Policy Rules
Rules define the behavior of the policy at evaluation time. Each rule defines:
Gate - example: dockerfile
Trigger - example: exposed_ports
Parameters - parameters specific to the gate/trigger to customize its match behavior
Action - the action to emit if a trigger evaluation finds a match. One of stop, go, warn. The only semantics of these values are in the aggregation behavior for the policy result.
Gates
A Gate is a logical grouping of trigger definitions and provides a broader context for the execution of triggers against image analysis data. You can think of gates as the “things to be checked”, while the triggers provide the “which check to run” context. Gates do not have parameters themselves, but namespace the set of triggers to ensure there are no name conflicts.
Triggers define a specific condition to check within the context of a gate, optionally with one or more input parameters. A trigger is logically a piece of code that executes with the image analysis content and feed data as inputs and performs a specific check. A trigger emits matches for each instance of the condition for which it checks in the image. Thus, a single gate/trigger policy rule may result in many matches in final policy result, often with different match specifics (e.g. package names, cves, or filenames…).
Trigger parameters are passed as name, value pairs in the rule JSON:
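For example, the rule in the rule set example above passes three parameters to its trigger:

```json
"params": [
  {"name": "package_type", "value": "all"},
  {"name": "severity_comparison", "value": ">="},
  {"name": "severity", "value": "medium"}
]
```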
For a complete listing of gates, triggers, and the parameters, see: Anchore Policy Gates
Policy Evaluation
All rules in a selected rule_set are evaluated, no short-circuits
Rules whose triggers and parameters find a match in the image analysis data will “fire”, resulting in a record of the match and parameters. A trigger may fire many times during an evaluation (e.g. many CVEs found).
Each firing of a trigger generates a trigger_id for that match
Rules may be executed in any order, and are executed in isolation (e.g. conflicting rules are allowed, it’s up to the user to ensure that policies make sense)
A policy evaluation will always contain information about the policy and image that was evaluated as well as the Final Action. The evaluation can optionally include additional detail about the specific findings from each rule in the evaluated rule_set as well as suggested remediation steps.
Policy Evaluation Findings
When extra detail is requested as part of the policy evaluation, the following data is provided for each finding produced by the rules in the evaluated rule_set.
trigger_id - An ID for the specific rule match that can be used to allowlist a finding
gate - The name of the gate that generated this finding
trigger - The name of the trigger within the Gate that generated this finding
message - A human readable description of the finding
action - One of go, warn, stop based on the action defined in the rule that generated this finding
policy_id - The ID for the rule_set that this rule is a part of
recommendation - An optional recommendation provided as part of the rule that generated this finding
rule_id - The ID of the rule that generated this finding
allowlisted - Indicates if this match was present in the applied allowlist
allowlist_match - Only provided if allowlisted is true; contains a JSON object with details about an allowlist match (allowlist id, name, and allowlist rule id)
inherited_from_base - An optional field that indicates if this policy finding was present in a provided comparison image
Excerpt from a policy evaluation, showing just the policy evaluation output:
```json
"findings": [
{
"trigger_id": "CVE-2008-3134+imagemagick-6.q16",
"gate": "package",
"trigger": "vulnerabilities",
"message": "MEDIUM Vulnerability found in os package type (dpkg) - imagemagick-6.q16 (CVE-2008-3134 - https://security-tracker.debian.org/tracker/CVE-2008-3134)",
"action": "go",
"policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
"recommendation": "Upgrade the package",
"rule_id": "rule1",
"allowlisted": false,
"allowlist_match": null,
"inherited_from_base": false
},
{
"trigger_id": "CVE-2008-3134+libmagickwand-6.q16-2",
"gate": "package",
"trigger": "vulnerabilities",
"message": "MEDIUM Vulnerability found in os package type (dpkg) - libmagickwand-6.q16-2 (CVE-2008-3134 - https://security-tracker.debian.org/tracker/CVE-2008-3134)",
"action": "go",
"policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
"recommendation": "Upgrade the package",
"rule_id": "rule1",
"allowlisted": false,
"allowlist_match": null,
"inherited_from_base": false
}
]
```
Final Action
The final action of a policy evaluation is the policy’s recommendation based on the aggregation of all trigger evaluations defined in the policy and the resulting matches emitted.
The final action of a policy evaluation will be:
stop - if there are any triggers that match with this action, the policy evaluation will result in an overall stop.
warn - if there are any triggers that match with this action, and no triggers that match with stop, then the policy evaluation will result in warn.
go - if there are no triggers that match with either stop or warn, then the policy evaluation result is go. go actions have no impact on the evaluation result, but are useful for recording the results of specific checks on an image in the audit trail of policy evaluations over time.
The policy findings are one part of the broader policy evaluation which includes things like image allowlists and denylists and makes a final policy evaluation status determination based on the combination of several component executions. See policies for more information on that process.
Next Steps
Read more about the Mappings component of a policy.
1.3.3 - Remediation
After Anchore analyzes images, discovers their contents and matches vulnerabilities, it can suggest possible actions that can be taken.
These actions range from adding a Healthcheck to your Dockerfile to upgrading a package version.
Since the solutions for resolving vulnerabilities can vary and may require several different forms of remediation and intervention, Anchore provides the capability to plan out your course of action.
Action Plans
Action plans group up the resolutions that may be taken to address the vulnerabilities or issues found in a particular image and provide a way for you to take action.
Currently, we support one type of Action Plan, which can be used to notify an existing endpoint configuration of those resolutions. This is a great way to facilitate communication across teams when vulnerabilities need to be addressed.
Here’s an example JSON that describes an Action Plan for notifications:
{"type":"notification","image_tag":"docker.io/alpine:latest","image_digest":"sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a","bundle_id":"anchore_default_bundle","resolutions":[{"trigger_ids":["CVE-2020-11-09-fake"],"content":"This is a Resolution for the CVE",}],"subject":"Actions required for image: alpine:latest","message":"These are some issues Anchore found in alpine:latest, and how to resolve them","endpoint":"smtp","configuration_id":"cda118f9ec63ddefb4a173a2b2a03"}
Parts:
type: The type of action plan being submitted (currently, only notification supported)
image_tag: The full image tag of the image requiring action
image_digest: The image digest of the image requiring action
bundle_id: the id of the policy bundle that discovered the vulnerabilities
resolutions: A list composed of the remediations and corresponding trigger IDs
subject: The subject line for the action plan notification
message: The body of the message for the action plan notification
endpoint: The type of notification endpoint the action plan will be sent to
configuration_id: The uuid of the notification configuration for the above endpoint
1.4 - Anchore Enterprise Data Feeds
Anchore Data Service Overview
Anchore operates a hosted service called the Anchore Data Service that serves pre-built datasets to customer Enterprise deployments.
Anchore Data Service manages four datasets:
Vulnerability Database (grypedb) - This dataset contains vulnerability data from the following sources:
Alpine
Amazon Linux
Anchore Exclusions (CVEs that Anchore has excluded from the feed)
Chainguard
Debian
Github
Mariner
MSRC
NVD (Including the Anchore Enhancements)
Oracle
RHEL
SLES
Ubuntu
Wolfi
ClamAV Malware Database - This dataset contains malware signatures that are used to detect malware in images.
CISA Known Exploited Vulnerabilities (KEV) - This dataset contains vulnerability annotations that are used to provide additional context to vulnerabilities.
Exploit Prediction Scoring System (EPSS) - This dataset contains exploit prediction scores for vulnerabilities.
These datasets are refreshed by pipelines that run every 6 hours.
Data Syncer Service Design
Anchore Enterprise includes a service, called the Data Syncer Service, that is responsible for syncing the datasets from the Anchore Data Service and making them available for use by the rest of Anchore Enterprise.
The following two FQDNs need to be allowlisted in your network to allow the Data Syncer Service to communicate with the Anchore Data Service:
Anchore Enterprise and its components are delivered as Docker container images which can be deployed as co-located, fully distributed, or anything in-between. Anchore Enterprise can run on a single host or be deployed in a scale out pattern for increased analysis throughput.
To get up and running, jump to the following guides of your choosing:
Enterprise Container Images
Note
You need a Dockerhub PAT from Anchore Customer Success in order to download the Anchore Enterprise Container Images
This section details the general requirements for running Anchore Enterprise. For a conceptual understanding of Anchore Enterprise, please see the Overview topic prior to deploying the software.
Runtime
Anchore Enterprise requires a Docker compatible runtime (version 1.12 or higher). Deployment is supported on:
Docker Compose (for demo or proof-of-concept and small deployments)
Any Kubernetes Certified Service Provider (KCSP) as certified by the Cloud Native Computing Foundation (CNCF), via Helm.
Any Certified Kubernetes Distribution as certified by the Cloud Native Computing Foundation (CNCF), via Helm.
Amazon Elastic Container Service (ECS) via Helm.
Resourcing
Use-case and usage patterns will determine the resource requirements for Anchore Enterprise. When deploying via Helm, requests and limits are set in the values.yaml file. When deploying via Docker Compose, add reservations and limits to your compose file. The following recommendations can get you started (a sample values snippet follows the list):
Requests specify the desired resource amounts for the container, while limits specify the maximum resource amounts the container is allowed. We have found that setting the request and limit to the same value provides the best quality of service (QoS) from Kubernetes. We do not recommend setting limits for CPU.
We do not recommend setting less than 1 CPU unit for any containers. Less than this could result in unexpected behaviour and should only be used in testing scenarios.
For the catalog, policy and postgresql service containers, we recommend a minimum of 2 CPU units.
We do not recommend setting memory units to less than 8G except for API and UI services, where we recommend starting at 4G. Less than these values could result in OOM errors or containers restarting unexpectedly.
When considering horizontal scaling, look to maintain a 4:1 analyzer to core services (API, policy, catalog) ratio. For example, an 8 analyzer deployment should generally have 2 API, policy and catalog pods.
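As an illustration only, a Helm values snippet that follows these sizing recommendations for the catalog service might look like the sketch below; the exact keys can vary by chart version, so confirm them against the chart’s values.yaml.

```yaml
# Sketch only - confirm key names against the Anchore Enterprise chart's values.yaml
catalog:
  resources:
    requests:
      cpu: 2          # at least 2 CPU units for catalog, policy and postgresql
      memory: 8Gi     # request and limit set to the same value for predictable QoS
    limits:
      memory: 8Gi     # no CPU limit, per the recommendation above
```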
Database
The only service dependency strictly required by Anchore Enterprise is a PostgreSQL database (13.0 or higher) that all services connect to, but do not use for communication beyond some very simple service registration/lookup processes. The database is centralized simply for ease of management and operation. For more information, go to Anchore Enterprise Architecture.
Anchore Enterprise uses this database to provide persistent storage for image, policy and analysis data. Database storage requirements are based on the number of SBOMs and how long these need to be stored in the active set (i.e. not archived to object storage/s3 or deleted). Each SBOM and its respective packages are indexed in the DB, so SBOM complexity also requires increased database storage. Runtime adds a further requirement here.
We suggest configuring a default artifact lifecycle policy and/or archival rules, and monitoring database storage usage closely according to your use-case. Size your initial database at roughly 50MB per image in your active set, and use object storage for the archive (see below).
A PostgreSQL database ships with the default deployment mechanisms for Anchore Enterprise. This is often referred to as the Anchore-managed database. This can be run in a container, as configured in the example Docker Compose file and default Helm values file.
The PostgreSQL database requirement can also be provided as an external service to Anchore Enterprise. PostgreSQL compatible databases, such as Amazon RDS for PostgreSQL, can be used for highly-scalable cloud deployments.
An external PostgreSQL-compatible database is broadly recommended for any production deployment.
We recommend against using connection pooling for your database (such as pg_bouncer) as it has been known to cause issues with Anchore Enterprise, which does its own connection pooling using SQLAlchemy.
FIPS Enabled Hosts
If Anchore Enterprise is deployed on FIPS-enabled hosts and Amazon RDS (including GovCloud) is hosting the Anchore database, you will be required to use PostgreSQL version 16 or higher. This is due to RHEL 9 enforcing the FIPS 140-3 requirements. Amazon RDS only supports EMS or TLS 1.3 when using PostgreSQL 16 or greater.
Network
An Anchore Enterprise deployment requires the following three categories of network access:
Service Access
Connectivity between Anchore Enterprise services, including access to an external database.
Registry Access
Network connectivity, including DNS resolution, to the registries from which Anchore Enterprise needs to download images.
Anchore Data Service Access
Anchore Enterprise requires access to the datasets in order to perform analysis and vulnerability matching. See Anchore Enterprise Data Feeds for more information.
Security
Anchore Enterprise is deployed as source repositories or container images that can be run manually using Docker Compose, Kubernetes or any other supported container platform.
By default, Anchore Enterprise does not require any special permissions. It can be run as an unprivileged container with no access to the underlying Docker host.
Note: Anchore Enterprise can be configured to pull images through the Docker Socket. However, this configuration is not recommended, as it grants the Anchore Enterprise container added privileges, and may incur a performance impact on the Docker Host.
Storage
Anchore Enterprise can be configured to depend on other storage for various artifacts. For full details on storage configuration, see Storage Configuration.
Configuration volumes:
this volume is used to provide persistent storage to the container from which it will read its configuration files, and optionally - certificates. Requirement: Less than 1MB.
[Optional] Scratch space:
this temporary storage volume is recommended but not required. During the analysis of images, Anchore Enterprise downloads and extracts all of the layers required for an image. These layers are extracted and analyzed, after which the layers and extracted data are deleted. If temporary storage is not configured, then the container’s ephemeral storage will be used to store temporary files. However, performance is likely to be improved by using a dedicated volume.
[Optional] Layer cache:
another temporary storage volume may also be used for image-layer caching to speed up analysis. This caches image layers for re-use by analyzers when generating an SBOM / analyzing an image.
When configuring scratch and layer cache, the size of these volumes should generally be three times the uncompressed image size to be analyzed.
A temporary volume is required to work around a kernel driver bug for container hosts that use OverlayFS or OverlayFS2 storage, with a kernel older than 4.13.
[Optional] Object storage
Anchore Enterprise stores documents containing archives of image analysis data and policies as JSON documents. By default, these documents are stored within the PostgreSQL database. However, Anchore Enterprise can be configured to store archive documents in a filesystem (volume), S3 Object store, or Swift Object Store. Requirement: Number of images x 10MB (estimated).
The estimated storage requirement for object storage is the total number of images x 10MB.
Enterprise UI
The Anchore Enterprise UI module interfaces with Anchore API using the external API endpoint. The UI requires access to the Anchore database where it creates its own namespace for persistent configuration storage. Additionally, a Redis database deployed and managed by Anchore Enterprise through the supported deployment mechanisms is used to store session information.
Network
Ingress
The Anchore UI module publishes a web UI service by default on port 3000; however, this port can be remapped.
Egress
The Anchore UI module requires access to the following three network services at a minimum:
External API endpoint (typically port 8228)
Redis Database (typically port 6379)
PostgreSQL Database (typically port 5432)
Redis Service
Version 7 or higher
Optimizing your Deployment
Optimizing your Anchore deployment on Kubernetes involves various strategies to enhance performance, reliability, and scalability. Here are some key tips:
Ensure that your Analyzer, API, Catalog, and Policy service containers have adequate CPU and memory resources. Each service has reference recommendations which can be found in the Anchore Enterprise chart values.yaml.
Integrate with monitoring tools like Prometheus and Grafana to monitor key metrics like CPU, memory usage, analysis times, and feed sync status. You can also set up alerts for critical thresholds. Follow our Monitoring guides for Prometheus and Grafana setup.
For large deployments, it is good practice to schedule regular vacuuming, indexing, and performance tuning to keep the database running efficiently.
Layer caching in Docker can significantly speed up the image build process by reusing layers that haven’t changed, reducing build times and improving efficiency. Follow our guide on Layer Caching setup.
Next Steps
If you feel you have a solid grasp of the requirements for deploying Anchore Enterprise, we recommend following one of our installation guides.
2.2 - Deploy using Docker Compose
In this topic, you’ll learn how to use Docker Compose to get up and running with a stand-alone Anchore Enterprise deployment.
Note
Docker Compose is only recommended for testing (e.g. demo or proof-of-concept) or small deployments. For all other usage patterns, customers should use a Helm-based deployment on K8s which enables easier scaling, modular deployment and fine-grained configuration.
Before moving further with Anchore Enterprise, it is highly recommended to read the Overview sections to gain a deeper understanding of fundamentals, concepts, and proper usage.
The following instructions assume you are using a system running Docker v1.12 or higher, and a version of Docker Compose that supports at least v2 of the docker compose configuration format.
A stand-alone deployment requires at least 16GB of RAM, and enough disk space available to support the largest container images or source repositories that you intend to analyze. It is recommended to consider three times the largest source repository or container image size. For small testing, like basic Linux distro images or database images, between 20GB and 40GB of disk space should be sufficient.
To access Anchore Enterprise, you need a valid license.yaml file that has been issued to you by Anchore. If you do not have a license yet, visit the Anchore Contact page to request one.
You need root or sudo access to the system where you will be running Docker and deploying Anchore Enterprise; all commands in this document are run as root.
Getting Started
Follow the steps below to get up and running!
Step 1: Check access to images
You’ll need authenticated access to the anchore/enterprise and anchore/enterprise-ui repositories on DockerHub. Anchore Customer Success will provide a Dockerhub PAT (Personal Access Token) for access to images. Login with your Docker PAT to push and pull images from Docker Hub:
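For example (the username is a placeholder; use the Docker Hub account associated with the PAT you were issued):

```shell
# Log in to Docker Hub; when prompted for a password, paste the PAT
docker login -u <your-dockerhub-username>
```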
Edit the compose file to set all instances of ANCHORE_ADMIN_PASSWORD to a strong password of your choice.
- ANCHORE_ADMIN_PASSWORD=yourstrongpassword
The admin password value must be the same across all services defined in the compose file.
Then start your environment from your working directory:
# docker compose up -d
Step 3: Install AnchoreCTL
Next, we’ll install the lightweight Anchore Enterprise client tool, quickly test using the version operation, and set up a few environment variables to allow it to interact with your deployment using the admin password you defined in the previous step.
In this guide, AnchoreCTL is installed to /usr/local/bin/ and uses environment variables throughout. For more details on using and configuring AnchoreCTL, see Using AnchoreCTL.
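A minimal sketch of the environment setup, assuming the binary has been placed at ./anchorectl and that you kept the quickstart defaults (admin user, API on localhost:8228, and the admin password you set in the compose file):

```shell
export ANCHORECTL_URL="http://localhost:8228"
export ANCHORECTL_USERNAME="admin"
export ANCHORECTL_PASSWORD="yourstrongpassword"

# Quick test that the client runs
./anchorectl version
```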
Step 4: Verify service availability
After a few minutes (depending on system speed) Anchore Enterprise and Anchore UI services should be up and running, ready to use. You can verify the containers are running with docker compose, as shown in the following example.
# docker compose ps
Name Command State Ports
-------------------------------------------------------------------------------------------------------
anchorequickstart_analyzer_1 /docker-entrypoint.sh anch ... Up (healthy) 8228/tcp
anchorequickstart_anchore-db_1 docker-entrypoint.sh postgres Up 5432/tcp
anchorequickstart_api_1 /docker-entrypoint.sh anch ... Up (healthy) 0.0.0.0:8228->8228/tcp
anchorequickstart_catalog_1 /docker-entrypoint.sh anch ... Up (healthy) 8228/tcp
anchorequickstart_data-syncer_1 /docker-entrypoint.sh anch ... Up (healthy) 0.0.0.0:8778->8228/tcp
anchorequickstart_notifications_1 /docker-entrypoint.sh anch ... Up (healthy) 0.0.0.0:8668->8228/tcp
anchorequickstart_policy-engine_1 /docker-entrypoint.sh anch ... Up (healthy) 8228/tcp
anchorequickstart_queue_1 /docker-entrypoint.sh anch ... Up (healthy) 8228/tcp
anchorequickstart_reports_1 /docker-entrypoint.sh anch ... Up (healthy) 0.0.0.0:8558->8228/tcp
anchorequickstart_reports_worker_1 /docker-entrypoint.sh anch ... Up (healthy) 0.0.0.0:55427->8228/tcp
anchorequickstart_ui-redis_1 docker-entrypoint.sh redis ... Up 6379/tcp
anchorequickstart_ui_1 /docker-entrypoint.sh node ... Up 0.0.0.0:3000->3000/tcp
You can then run a command to get the status of the Anchore Enterprise services:
# ./anchorectl system status
✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE │ HOST ID │ URL │ UP │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer │ anchore-quickstart │ http://analyzer:8228 │ true │ available │ 5180 │ 5.18.0 │
│ policy_engine │ anchore-quickstart │ http://policy-engine:8228 │ true │ available │ 5180 │ 5.18.0 │
│ apiext │ anchore-quickstart │ http://api:8228 │ true │ available │ 5180 │ 5.18.0 │
│ reports │ anchore-quickstart │ http://reports:8228 │ true │ available │ 5180 │ 5.18.0 │
│ reports_worker │ anchore-quickstart │ http://reports-worker:8228 │ true │ available │ 5180 │ 5.18.0 │
│ data_syncer │ anchore-quickstart │ http://data-syncer:8228 │ true │ available | 5180 │ 5.18.0 │
│ simplequeue │ anchore-quickstart │ http://queue:8228 │ true │ available │ 5180 │ 5.18.0 │
│ notifications │ anchore-quickstart │ http://notifications:8228 │ true │ available │ 5180 │ 5.18.0 │
│ catalog │ anchore-quickstart │ http://catalog:8228 │ true │ available │ 5180 │ 5.18.0 │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘
Note: The first time you run Anchore Enterprise, vulnerability data will sync to the system in a few minutes.
For the best experience, wait until the core vulnerability data feeds have completed before proceeding. You can check the status of your feed sync using AnchoreCTL:
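For example, you can list the feed groups and their record counts (the same feed list command is used later in this document):
# ./anchorectl feed list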
As soon as you see RecordCount values set for all vulnerability groups, the system is fully populated and ready to present vulnerability results. Note that data syncs are incremental, so the next time you start up Anchore Enterprise it will be ready immediately. AnchoreCTL includes a useful utility that will block until the feeds have completed a successful sync:
# ./anchorectl system wait
✔ API available system
✔ Services available [10 up] system
✔ Vulnerabilities feed ready system
Step 5: Start using Anchore
To get started, you can add a few images to Anchore Enterprise using AnchoreCTL. Once complete, you can also run an additional AnchoreCTL command to monitor the analysis state of the added images, waiting until the images move into an ‘analyzed’ state.
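A minimal sketch of those two steps, using an arbitrary example image:
# ./anchorectl image add docker.io/library/alpine:latest
# ./anchorectl image wait docker.io/library/alpine:latest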
Optional: Enabling Prometheus Monitoring
Uncomment the following section at the bottom of the docker-compose.yaml file:
# # Uncomment this section to add a prometheus instance to gather metrics. This is mostly for quickstart to demonstrate prometheus metrics exported
# prometheus:
# image: docker.io/prom/prometheus:latest
# depends_on:
# - api
# volumes:
# - ./anchore-prometheus.yml:/etc/prometheus/prometheus.yml:z
# logging:
# driver: "json-file"
# options:
# max-size: 100m
# ports:
# - "9090:9090"
#
For each service entry in the docker-compose.yaml file, change the following to enable metrics in that service’s API:
ANCHORE_ENABLE_METRICS=false
to
ANCHORE_ENABLE_METRICS=true
Download the example prometheus configuration into the same directory as the docker-compose.yaml file, with name anchore-prometheus.yml:
curl https://docs.anchore.com/current/docs/deployment/anchore-prometheus.yml > anchore-prometheus.yml
docker compose up -d
Result: You should see a new container started and can access prometheus via your browser on http://localhost:9090.
Optional: Enabling Swagger UI
Uncomment the following section at the bottom of the docker-compose.yaml file:
# # Uncomment this section to run a swagger UI service, for inspecting and interacting with the system API via a browser (http://localhost:8080 by default, change if needed in both sections below)
# swagger-ui-nginx:
# image: docker.io/nginx:latest
# depends_on:
# - api
# - swagger-ui
# ports:
# - "8080:8080"
# volumes:
# - ./anchore-swaggerui-nginx.conf:/etc/nginx/nginx.conf:z
# logging:
# driver: "json-file"
# options:
# max-size: 100m
# swagger-ui:
# image: docker.io/swaggerapi/swagger-ui
# environment:
# - URL=http://localhost:8080/v2/openapi.json
# logging:
# driver: "json-file"
# options:
# max-size: 100m
Download the nginx configuration into the same directory as the docker-compose.yaml file, with name anchore-swaggerui-nginx.conf:
curl https://docs.anchore.com/current/docs/deployment/anchore-swaggerui-nginx.conf > anchore-swaggerui-nginx.conf
docker compose up -d
Result: You should see a new container started, and have access to the Swagger UI via your browser on http://localhost:8080.
2.3 - Deploy on Kubernetes using Helm
The supported method for deploying Anchore Enterprise on Kubernetes is with Helm. The Anchore Enterprise Helm Chart includes configuration options for a full Enterprise deployment.
Note
Always consult the chart README and release notes prior to deployment or upgrade as this contains the most current information on deployment configuration.
The chart is split into global and service specific configurations for the core features, as well as global and services specific configurations for the optional Enterprise services.
The anchoreConfig section of the values file contains the application configuration for Anchore Enterprise. This includes the database connection information, credentials, and other application settings.
Anchore services run as a kubernetes deployment when installed with the Helm chart. Each service has its own section in the values file for making customizations and configuring the kubernetes deployment spec.
Note If you are moving from the Anchore Engine Helm chart deployment to the updated Anchore Enterprise Helm chart, see here for further guidance.
Prerequisites
See the README in the chart repository for prerequisites before starting the deployment.
Installing the Chart
This guide covers deploying Anchore Enterprise on a Kubernetes cluster with the default configuration. Refer to the Configuration section of the chart README for additional guidance on production deployments.
Create the namespace: The steps to follow will require the namespace to have been created already.
Create a Kubernetes Secret for DockerHub Credentials: Generate another Kubernetes secret for DockerHub credentials. These credentials should have access to private Anchore Enterprise repositories. We recommend that you create a brand new DockerHub user for these pull credentials. Contact Anchore Support to obtain access.
Add Chart Repository & Deploy Anchore Enterprise: Create a custom values file, named anchore_values.yaml, to override any chart parameters. Refer to the Parameters section for available options. Example commands for these steps are sketched below, after the post-installation notes.
Important: Default passwords are specified in the chart. It’s highly recommended to modify these before deploying.
Note: The RELEASE variable should not contain any dots.
Note: This command installs Anchore Enterprise with a chart-managed PostgreSQL database, which may not be suitable for production use. See the External Database section of the chart README for details on using an external database.
Post-Installation Steps: Anchore Enterprise will take some time to initialize. After the bootstrap phase, it will begin a vulnerability feed sync. Image analysis will show zero vulnerabilities, and the UI will show errors until this sync is complete. This can take several hours based on the enabled feeds. Use the following anchorectl commands to check the system status:
```shell
export NAMESPACE=anchore
export RELEASE=my-release
export ANCHORECTL_URL=http://localhost:8228
export ANCHORECTL_PASSWORD=$(kubectl get secret "${RELEASE}-enterprise" -o jsonpath='{.data.ANCHORE_ADMIN_PASSWORD}' | base64 -d -)

# port forward for anchorectl in another terminal
kubectl port-forward -n ${NAMESPACE} svc/${RELEASE}-enterprise-api 8228:8228

# anchorectl defaults to the user admin, and to the password ${ANCHORECTL_PASSWORD} automatically if set
anchorectl system status
```
Tip: List all releases using helm list
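The commands for the steps above are not shown here. A sketch of a typical sequence follows; the chart repository URL and chart name are assumptions based on Anchore’s public Helm repository, and the secret names mirror those used in the OpenShift example later in this document, so confirm all of them against the chart README.

```shell
# Create the namespace
kubectl create namespace anchore

# License secret (license.yaml issued by Anchore)
kubectl create secret generic anchore-enterprise-license \
  --from-file=license.yaml=license.yaml -n anchore

# DockerHub pull-credential secret (use the account provided by Anchore)
kubectl create secret docker-registry anchore-enterprise-pullcreds \
  --docker-server=docker.io --docker-username=<username> \
  --docker-password=<password> --docker-email=<email> -n anchore

# Add the chart repository and install
helm repo add anchore https://charts.anchore.io
helm install my-release -n anchore -f anchore_values.yaml anchore/enterprise
```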
Next Steps
Now that you have Anchore Enterprise running, you can begin learning more about Anchore Enterprise architecture, Anchore concepts, and Anchore usage.
To learn more about Anchore Enterprise, go to Overview
To learn more about Anchore Concepts, go to Concepts
2.3.1 - Deploying Anchore Enterprise on Azure Kubernetes Service (AKS)
This document will walk you through the deployment of Anchore Enterprise in an Azure Kubernetes Service (AKS) cluster and expose it on the public Internet.
Prerequisites
A running AKS cluster with worker nodes launched. See AKS Documentation for more information on this setup.
Once you have an AKS cluster up and running with worker nodes launched, you can verify it via the following command.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
aks-nodepool1-28659018-0 Ready agent 4m13s v1.13.10
aks-nodepool1-28659018-1 Ready agent 4m15s v1.13.10
aks-nodepool1-28659018-2 Ready agent 4m6s v1.13.10
Anchore Helm Chart
Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:
Anchore Enterprise software
PostgreSQL (13 or higher)
Redis (7)
To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore, this document is intended to cover the minimum required changes to successfully deploy Anchore Enterprise in AKS.
Note: For this installation, an NGINX ingress controller will be used. You can read more about Kubernetes Ingress in AKS here.
Configurations
Make the following changes below to your anchore_values.yaml
Note: Configuring ingress is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.
Anchore API Service
```yaml
# Pod configuration for the anchore api service.
api:
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}
```
Note: Changed the service type to NodePort
Anchore Enterprise UI
```yaml
ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP
```
Note: Changed service type to NodePort.
Install NGINX Ingress Controller
Using Helm, install an NGINX ingress controller in your AKS cluster.
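A sketch using the community ingress-nginx chart (the repository URL shown is the public one; adjust the namespace to suit your cluster):

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```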
Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.
Create a Kubernetes secret containing your license file:
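For example, assuming the license file is named license.yaml in the current directory, and creating the DockerHub pull secret alongside it:

```shell
kubectl create secret generic anchore-enterprise-license \
  --from-file=license.yaml=license.yaml

kubectl create secret docker-registry anchore-enterprise-pullcreds \
  --docker-server=docker.io --docker-username=<username> \
  --docker-password=<password> --docker-email=<email>
```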
We can see that NGINX ingress controller has been installed as well from the previous step. You can view the services by running the following command:
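For example:

```shell
kubectl get services --all-namespaces
```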
Note: The above output shows that the IP address of the NGINX ingress controller is 40.114.26.147. Going to this address in the browser will take us to the Anchore login page.
Anchore System
Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:
ANCHORECTL_URL=http://40.114.26.147/v2/ ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status
Anchore Feeds
It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:
ANCHORECTL_URL=http://40.114.26.147/v2/ ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list
Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync occurs.
Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Vulnerability Management section of our documentation for more information.
2.3.2 - Deploying Anchore Enterprise on Amazon EKS
This section provides information on how to deploy Anchore Enterprise onto Amazon EKS. Here is the recommended architecture on Amazon EKS:
Prerequisites
You’ll need a running Amazon EKS cluster with worker nodes. See EKS Documentation for more information on this setup.
Once you have an EKS cluster up and running with worker nodes launched, you can verify it using the following command:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-2-164.ec2.internal Ready <none> 10m v1.14.6-eks-5047ed
ip-192-168-35-43.ec2.internal Ready <none> 10m v1.14.6-eks-5047ed
ip-192-168-55-228.ec2.internal Ready <none> 10m v1.14.6-eks-5047ed
In order to deploy the Anchore Enterprise services, you’ll also need the Helm client installed on your local host.
Deployment via Helm Chart
Anchore maintains a Helm chart to simplify the software deployment process.
To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore. The following is intended to cover the recommended changes for successfully deploying Anchore Enterprise on Amazon EKS.
Configurations
The following configurations should be used when deploying on EKS.
RDS
Anchore recommends utilizing Amazon RDS for a managed database service, rather than the Anchore chart-managed postgres. For information on how to configure for an external RDS database, see Amazon RDS.
S3 Object Storage
Anchore supports the use of S3 object storage for archival of SBOMs, configuration details can be found here. Consider using the iamauto: True option to utilise IAM roles for access to S3.
PVCs
Anchore by default uses ephemeral storage for pods but we recommend configuring Analyzer scratch space, at a minimum. Further details can be found here.
Anchore generally recommends providing EBS-backed storage for analyzer scratch of the gp3 type. Note that you will need to follow the AWS guide on storing K8s volumes with Amazon EBS. Once the CSI driver is configured for your cluster, you will then need to configure your helm chart with values similar to this:
```yaml
analyzer:
  scratchVolume:
    details:
      ephemeral:
        volumeClaimTemplate:
          metadata: {}
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                # must be 3x ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB + analyzer_cache_size
                storage: 100Gi
            # this would refer to whatever your storage class was named
            storageClassName: "gp3"
```
Here is a sample manifest for use with the AWS LBC ingress:
```yaml
ingress:
  enabled: true
  apiPaths:
    - /v2/
    - /version/
  uiPath: /
  ingressClassName: alb
  annotations:
    # See https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/guide/ingress/annotations.md for further customization of annotations
    alb.ingress.kubernetes.io/scheme: internet-facing
  # If you do not plan to bring your own hostname (i.e. use the AWS supplied CNAME for the load balancer) then you can leave apiHosts & uiHosts as empty lists:
  apiHosts: []
  uiHosts: []
  # If you plan to bring your own hostname then you'll likely want to populate them as follows:
  # apiHosts:
  #   - anchore.mydomain.com
  # uiHosts:
  #   - anchore.mydomain.com
```
Note
There are alternative ways to access services within your EKS cluster besides LBC ingress.
You must also configure/change the following from ClusterIP to NodePort:
For the Anchore API Service:
```yaml
# Pod configuration for the anchore engine api service.
api:
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}
```
For the Anchore Enterprise UI Service:
```yaml
ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP
```
For users of Amazon ALB:
Users of ALB may want to align the timeout between gunicorn & ALB. The AWS ALB Connection idle timeout defaults to 60 seconds. The Anchore Helm charts have a timeout setting that defaults to 5 seconds which should be aligned with the ALB timeout setting.
Sporadic HTTP 502 errors may be emitted by the ALB if the timeouts are not in alignment. Please see this reference:
Note
Changed timeout_keep_alive from 5 to 65 to align with the ALB’s default timeout of 60.
```yaml
anchoreConfig:
  server:
    timeout_keep_alive: 65
```
Install Anchore Enterprise
Deploy Anchore Enterprise by following the instructions here.
Verify Ingress
Run the following command for details on the deployed ingress resource using the ELB:
$ kubectl describe ingress
Name: anchore-enterprise
Namespace: default
Address: xxxxxxx-default-anchoreen-xxxx-xxxxxxxxx.us-east-1.elb.amazonaws.com
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
   /v2/*   anchore-enterprise-api:8228 (192.168.42.122:8228)
   /*      anchore-enterprise-ui:80 (192.168.14.212:3000)
Annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
kubernetes.io/ingress.class: alb
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 14m alb-ingress-controller LoadBalancer 904f0f3b-default-anchoreen-d4c9 created, ARN: arn:aws:elasticloadbalancing:us-east-1:077257324153:loadbalancer/app/904f0f3b-default-anchoreen-d4c9/4b0e9de48f13daac
  Normal  CREATE  14m  alb-ingress-controller  rule 1 created with conditions [{ Field: "path-pattern", Values: ["/v2/*"]}]
  Normal  CREATE  14m  alb-ingress-controller  rule 2 created with conditions [{ Field: "path-pattern", Values: ["/*"]}]
The output above shows that an ELB has been created. Next, try navigating to the specified URL in a browser:
Verify Anchore Service Status
Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:
ANCHORECTL_URL=http://xxxxxx-default-anchoreen-xxxx-xxxxxxxxxx.us-east-1.elb.amazonaws.com ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status
2.3.3 - Deploying Anchore Enterprise on Google Kubernetes Engine (GKE)
Get an understanding of deploying Anchore Enterprise on a Google Kubernetes Engine (GKE) cluster and exposing it on the public Internet.
Note: When using Google Cloud, consider utilizing Cloud SQL for PostgreSQL as a managed database service.
Prerequisites
A running GKE cluster with worker nodes launched. See GKE Documentation for more information on this setup.
Once you have a GKE cluster up and running with worker nodes launched, you can verify it by using the following command.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-standard-cluster-1-default-pool-c04de8f1-hpk4 Ready <none> 78s v1.13.7-gke.24
gke-standard-cluster-1-default-pool-c04de8f1-m03k Ready <none> 79s v1.13.7-gke.24
gke-standard-cluster-1-default-pool-c04de8f1-mz3q Ready <none> 78s v1.13.7-gke.24
Anchore Helm Chart
Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:
Anchore Enterprise software
PostgreSQL (13 or higher)
Redis (7)
To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore. The following is intended to cover the minimum required changes to successfully deploy Anchore Enterprise on Google Kubernetes Engine.
Note: For this deployment, a GKE ingress controller will be used. You can read more about Kubernetes Ingress with a GKE Ingress Controller here
Configurations
Make the following changes below to your anchore_values.yaml
Ingress
```yaml
ingress:
  enabled: true
  apiPaths:
    - /v2/*
  uiPath: /*
```
Note: Configuring ingress is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.
Anchore API Service
```yaml
api:
  replicaCount: 1
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}
```
Note: Changed the service type to NodePort
Anchore Enterprise UI
```yaml
ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP
```
Note: Changed service type to NodePort.
Anchore Enterprise Deployment
Create Secrets
Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.
Create a Kubernetes secret containing your license file:
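As in the AKS example earlier, the license secret can be created with kubectl (assuming the file is named license.yaml in the current directory):

```shell
kubectl create secret generic anchore-enterprise-license \
  --from-file=license.yaml=license.yaml
```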
ANCHORECTL_URL=http://34.96.64.148 ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status
Anchore Feeds
It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:
ANCHORECTL_URL=http://34.96.64.148 ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list
Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync occurs.
Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Vulnerability Management section of our documentation for more information.
2.3.4 - Deploying Anchore Enterprise on OpenShift
This document will walk through the deployment of Anchore Enterprise on an OpenShift Kubernetes Distribution (OKD) 3.11 cluster and expose it on the public internet.
Note: While this document walks through deploying on OKD 3.11, it has been successfully deployed and tested on OpenShift 4.2.4 and 4.2.7.
Prerequisites
A running OpenShift Kubernetes Distribution (OKD) 3.11 cluster. Read more about the installation requirements here.
Note: If deploying to a running OpenShift 4.2.4+ cluster, read more about the installation requirements here.
Helm client and server installed and configured with your cluster.
Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise installation of the chart will include the following:
Anchore Enterprise Software
PostgreSQL (13)
Redis (7)
To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore, this document is intended to cover the minimum required changes to successfully deploy Anchore Enterprise on OKD 3.11.
OpenShift Configurations
Create a new project
Create a new project called anchore-enterprise:
oc new-project anchore-enterprise
Create secrets
Two secrets are required for an Anchore Enterprise deployment.
Create a secret for the license file:
oc create secret generic anchore-enterprise-license --from-file=license.yaml=license.yaml
Create a secret for pulling the images:
oc create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<username> --docker-password=<password> --docker-email=<email>
Verify these secrets are in the correct namespace: anchore-enterprise
oc describe secret <secret-name>
Link ImagePullSecret
Link the above Docker registry secret to the default service account:
oc secrets link default anchore-enterprise-pullcreds --for=pull --namespace=anchore-enterprise
Verify this by running the following:
oc describe sa
Note: Validate your OpenShift SCC. Based on the security constraints of your environment, you may need to change SCC. oc adm policy add-scc-to-user anyuid -z default
Anchore Configurations
Create a custom anchore_values.yaml file for your Anchore Enterprise deployment:
```yaml
# NOTE: This is not a production ready values file for an openshift deployment.
securityContext:
  fsGroup: null
  runAsGroup: null
  runAsUser: null
postgresql:
  primary:
    containerSecurityContext:
      enabled: false
    podSecurityContext:
      enabled: false
ui-redis:
  master:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
```
Install software
Run the following command to install the software:
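The install command itself is not shown above. A sketch, assuming the Anchore chart repository has already been added as anchore (the repository URL and chart name should be confirmed against the chart README):

```shell
helm repo add anchore https://charts.anchore.io
helm install anchore-enterprise -n anchore-enterprise \
  -f anchore_values.yaml anchore/enterprise
```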
You can customize your Helm values.yaml file to use existing or custom secrets rather than having Helm generate them for you with a generated password.
ANCHORECTL_URL=http://anchore-engine-anchore-enterprise.apps.54.84.147.202.nip.io \
ANCHORECTL_USERNAME=admin \
ANCHORECTL_PASSWORD=foobar \
anchorectl system status
Anchore Vulnerability Data
Anchore has a data syncer service that pulls the vulnerability and other data sources, such as the ClamAV malware database, into your Anchore deployment. You can check on the status of these feed data using AnchoreCTL:
```shell
ANCHORECTL_URL=http://anchore-engine-anchore-enterprise.apps.54.84.147.202.nip.io \
ANCHORECTL_USERNAME=admin \
ANCHORECTL_PASSWORD=foobar \
anchorectl feed list
```
Note: Please continue to the Vulnerability Management section of our documentation for more information about Vulnerability Management within Anchore.
2.4 - Anchore Enterprise Cloud Image
Overview
The Anchore Enterprise Cloud Image is a fully functional machine image with an Anchore Enterprise deployment that
is pre-configured with the goal of simplifying deployment complexity for our end users.
A quick Demo on getting started with Anchore Enterprise Cloud Image
Cloud Image Manager
The Cloud Image Manager is a proprietary tool that is pre-packaged in the cloud image. It allows users to manage
their Anchore Enterprise Cloud Image deployments by walking users through the process of installing,
configuring, and upgrading. For more details please see Cloud Image Manager.
Support Limits
The Cloud Image has the following limits, independent of instance type:
10,000 Image SBOMs
Max Image Size is 10 GB
300 Report Executions
100 System Users
2 - 8 accounts per deployment depending on your Purchased Tier.
Non-supported Features
The Cloud Image does not currently support the following Anchore Enterprise features:
Anchore Enterprise Cloud Image is a fully functional Anchore Enterprise deployment that is pre-configured
and ready to use. The cloud image is currently available for our Amazon users.
For general information on the Amazon Machine Images (AMI) and how to use them, see the
Amazon EC2 documentation.
The Anchore Enterprise Cloud Image Manager is shipped as part of the AMI to aid in the installation,
configuration, and management of the Anchore Enterprise Cloud Image. For more information about the Cloud Image
Manager, see the Cloud Image Manager.
Recommendations and Requirements
The following are requirements and recommended best practices for deploying the Anchore Enterprise Cloud Image in AWS.
Memory Requirement - The Cloud Image requires a minimum of 32 GB of memory to operate.
Disk Requirement - The Cloud Image requires a minimum of 128 GB of disk space for root volume and 1 TB for data volume to operate.
Note: The data volume by default will not delete on termination of your AMI.
CPU Requirement - The Cloud Image requires a minimum of 4 vCPU to operate.
AWS Supported Instance Type
The baseline supported instance type on Amazon Web Services is the r7a.xlarge. This gives the best mix of
performance to cost for running Anchore Enterprise.
The Cloud Image Manager will not enforce the use of this instance type but will check for the minimum resources needed
to run the software. If you would like to use a different instance type, please contact Anchore Support for guidance.
For more information on AWS instance types, please review the following links:
The Anchore Enterprise Cloud Image is running with FIPS enabled. When creating your Key Pair, you must use an RSA key.
The ED25519 key will be rejected as a non-FIPS-compliant algorithm.
Please review the Best Practices for the
Cloud Image Manager for the recommended terminal applications to use.
Anchore Cloud Image License
The Anchore Enterprise Cloud Image requires a valid license to operate. The license is provided by Anchore during the
purchase process. The license file is required to be uploaded via the Cloud Image Manager during the initial setup. Please have it available before starting the installation process.
Launching the AMI
To launch the Anchore Enterprise Cloud Image AMI, please refer to the AWS documentation on
Launch an Amazon EC2 instance.
Once the instance is launched, please review the Cloud Image Manager documentation for the next steps on
Accessing the Cloud Image Manager.
The Cloud Image Manager will walk you through the preflight checks, configuration,
and management of your Anchore Enterprise Cloud Image deployment.
Backup and Restore
It is important that you have a backup and restore strategy in place to protect your data. The Anchore Enterprise
Cloud Image Manager will prompt you to create a snapshot prior to upgrading your Anchore Enterprise Cloud Image or
expanding your disks. It is also reasonable for you to create a snapshot of your EBS volume on a regular basis.
During the course of using the product, you may wish to expand the size of your disks. It is strongly recommended
that you create a snapshot of your EBS volume prior to expanding your disks.
Once you have expanded your disk, you will need to resize the filesystem to take advantage of the additional space.
The Cloud Image Manager provides a utility to resize the filesystem. Please refer to the Cloud Image Manager
Configuration Disk Expansion for more information.
Upgrading the Cloud Image
Occasionally, Anchore will release updates to the Anchore Enterprise Cloud Image. The Cloud Image Manager will provide
you with the upgrades that are available to you and allow you to determine when you want to upgrade. It is strongly
recommended that you create a snapshot of your EBS volume prior to upgrading your Anchore Enterprise Cloud Image.
During operation of Anchore Enterprise or the Cloud Image, you may require support from Anchore Support. The
Cloud Image Manager provides you with a seamless way to generate a support bundle and upload it to Anchore Support.
The Cloud Image Manager is a proprietary tool that allows users to seamlessly manage their Anchore Enterprise
Cloud Image deployments. It walks users through the process of installing, configuring, and upgrading their
Anchore Enterprise Cloud Image deployment.
Best Practices
The Cloud Image Manager uses Textual (a TUI framework for Python) to provide
a terminal-based interface. For your best user experience, please use the following terminal emulators
when connecting to the Cloud Image Manager.
Note: We recommend against using the default macOS Terminal application as it may not render the TUI correctly. For more
information on why, please see Textual FAQ.
Accessing the Cloud Image Manager
After your instance is launched, you can access the Cloud Image Manager by connecting to the instance via SSH.
Using your private key file used for authentication (likely generated when setting up the instance) and the
public IP address of the instance, connect using the following example command:
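For example (the key file name matches the one used below, and ec2-user is an assumed default user; substitute the values for your instance):

```shell
ssh -i ~/my-keypair.pem ec2-user@<instance-public-ip>
```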
Permissions on key file - If you get a WARNING: UNPROTECTED PRIVATE KEY FILE error, fix it by setting the
correct permissions on your key file. Run the following command to set the correct permissions:
chmod 400 ~/my-keypair.pem
Connection Issues - If you experience a Connection Timeout or Host Unreachable error, verify that the instance
is running and that the security group allows SSH traffic on port 22.
You should now be connected to the Cloud Image Manager.
Preflight Checks
The Cloud Image Manager will perform a series of preflight checks to ensure that the system is ready for installation.
These checks include ensuring that the machine image has met memory, disk space, and CPU requirements. If the system
does not meet the requirements, the preflight checks will fail and the installation will not proceed.
Initial Install
The Cloud Image Manager will walk you through the initial installation process. At the end of this process, the
Cloud Image Manager will provide you with the URL to access the Anchore Enterprise UI as well as your administrator
credentials.
Upgrade
The Cloud Image Manager will determine if there are any upgrades available for your Anchore Enterprise Cloud Image
deployment. If an upgrade is available, the Cloud Image Manager will walk you through the upgrade process. If
downtime is required, the Cloud Image Manager will notify you prior to proceeding. This will allow you to plan
for the upgrade when it is convenient for you. It is highly recommended that you take a snapshot of your EBS
volume prior to upgrade.
Configuration
The Cloud Image Manager configuration screen allows the following options:
Adding and updating the Anchore Enterprise License.
Providing any Server Certificates required for TLS access to Anchore Enterprise services.
Providing a custom Root Certificate if one is required for your environment.
Configuring any optional proxy settings required for your environment.
Disk Expansion
Re-configuring Proxy Settings
Changing Proxy settings after completing the installation process currently requires manual intervention for the settings to be fully applied.
If you must change the Proxy settings, please contact customer support for assistance.
Expanding Disks
The Cloud Image Manager provides a utility to expand the root and data volumes once your virtual hard disk has been
increased in size. This step is necessary to take advantage of the additional space. The Cloud Image Manager will
shut down Anchore Enterprise during this operation. It is highly recommended that you take a snapshot of your EBS
volume prior to any operation that may modify your disk volumes.
System Status
The Cloud Image Manager provides a system status screen that shows the current service and container status
of the Anchore Enterprise services.
It also provides the list of currently deployed versions of Anchore Enterprise, Anchore Enterprise UI as well as
the other infrastructure components that are automatically deployed within the Anchore Enterprise Cloud Image.
Support
The Cloud Image Manager provides a support screen that allows you to:
Generate a support bundle. This will provide the location of the support bundle.
Upload a generated support bundle. This will be automatically uploaded to Anchore. You must create a support
ticket and provide the Support Bundle ID and Filename to the support team.
As part of the Cloud Image deployment, you have access to Grafana data that is collected for your deployment.
This data can be used to monitor the health of your deployment. The Cloud Image Manager provides a link and
credentials to access the Grafana dashboard.
2.5 - Deploying AnchoreCTL
In this section you will learn how to deploy and configure AnchoreCTL, the Anchore Enterprise Command Line Interface.
AnchoreCTL is published as a simple binary available for download either from your Anchore Enterprise deployment or Anchore’s release site.
Using AnchoreCTL, you can manage and inspect all aspects of your Anchore Enterprise deployments, either as a manual
human-readable configuration/instrumentation/control tool or as a CLI that is designed to be used in scripted environments
such as CI/CD and other automation environments.
AnchoreCTL should be version-aligned with Anchore Enterprise for major/minor releases. Refer to the Enterprise Release Notes for the supported version of AnchoreCTL.
Installation
AnchoreCTL’s major and minor release version coincides with the release version of Anchore Enterprise, however patch versions may differ. For example,
Enterprise v5.18.0
AnchoreCTL v5.18.0
Important It is highly recommended that the version of AnchoreCTL you are using is supported by the deployed version of Enterprise. Please refer to the Enterprise Release Notes for the supported version of AnchoreCTL. See Local examples below where anchorectl can be downloaded from your Anchore Enterprise deployment.
MacOS / Linux
Download a local (from your Anchore deployment) or remote (from Anchore servers) version without installation:
Linux Intel/AMD64
# Local
curl -X GET "https://my-anchore.example.com/v2/system/anchorectl?operating_system=linux&architecture=amd64" -H "accept: */*" | tar -zx anchorectl
2.6 - Anchore Enterprise in an Air-Gapped Environment
Anchore Enterprise can run in an isolated environment with no outside internet connectivity. It does require a network connection to its own components and should be able to reach registries (Docker v2 API compatible) where the images to be analyzed are hosted.
Make sure that the Anchore Enterprise images are either proxied into a local registry or available as local images on the system running Anchore Enterprise. These images should be referenced in your docker-compose.yaml or Helm values.yaml to enable installation within a private network.
To ensure that the Anchore Enterprise installation has up-to-date vulnerability data from the vulnerability sources, you will need to periodically download and import feed data into your Anchore Enterprise deployment. Details on how to do this can be found in the Air-Gapped Configuration.
For more detail regarding the Anchore Data Service, please see Anchore Data Service.
2.6.1 - Anchore Enterprise in an Air-Gapped Environment
Once you have all the required images locally, you will need to push the images to your local registry and point image location for each service to the url of the images in your registry.
We will assume we are using a Harbor registry locally accessible at core.harbor.domain. Follow these steps to push the images to your local registry and deploy Anchore Enterprise:
Tag images
Since Docker images are currently tagged with docker.io, you need to retag them with your Harbor registry URL.
Replace core.harbor.domain with your actual registry domain:
docker tag docker.io/anchore/enterprise:v5.15.0 core.harbor.domain/anchore/enterprise:v5.15.0
docker tag docker.io/library/postgres:13 core.harbor.domain/library/postgres:13
docker tag docker.io/library/redis:7 core.harbor.domain/library/redis:7
docker tag docker.io/anchore/enterprise-ui:v5.15.0 core.harbor.domain/anchore/enterprise-ui:v5.15.0
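After retagging, push the images to your registry (hostnames follow the core.harbor.domain example above):
docker push core.harbor.domain/anchore/enterprise:v5.15.0
docker push core.harbor.domain/library/postgres:13
docker push core.harbor.domain/library/redis:7
docker push core.harbor.domain/anchore/enterprise-ui:v5.15.0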
With your license file and docker-compose.yaml file in the current working directory, execute the following to deploy Anchore Enterprise in your air-gapped environment:
docker compose up -d
2.6.2 - Anchore Enterprise in an Air-Gapped Environment
Download images locally
Follow these steps to manually transfer the images and deploy Anchore Enterprise on Docker.
Note
You need a Dockerhub PAT from Anchore Customer Success in order to pull Anchore Enterprise images
Download Images from a System with Internet Access
On a machine that has internet access, pull all the relevant Anchore images. We will assume the latest Anchore Enterprise version is v5.15, so we will pull the images listed below (make sure to pull the current versions as needed).
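As an illustration, assuming the same images and versions referenced elsewhere in this guide:
docker pull docker.io/anchore/enterprise:v5.15.0
docker pull docker.io/anchore/enterprise-ui:v5.15.0
docker pull docker.io/library/postgres:13
docker pull docker.io/library/redis:7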
Save Images as Tar Files
Once the images are pulled, save them as a tarball so that they can be transferred to the air-gapped system. Run the following command:
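A minimal sketch, assuming the image versions pulled above and an output filename of anchore-enterprise-images.tar:
docker save -o anchore-enterprise-images.tar docker.io/anchore/enterprise:v5.15.0 docker.io/anchore/enterprise-ui:v5.15.0 docker.io/library/postgres:13 docker.io/library/redis:7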
Transfer this file to your offline system (using a memory stick or similar method).
Set Up and Deploy
On the air-gapped system, place the downloaded docker-compose.yaml file in your working directory, along with your license file. Make sure the docker-compose.yaml file references the images by name and tag exactly as they appear on your local system.
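Load the transferred images into the local Docker engine so that they are available by name and tag (filename as used in the save step above):
docker load -i anchore-enterprise-images.tar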
Now, you can deploy Anchore with:
docker compose up -d
Docker will automatically use the locally loaded images if they exist with the correct name and tag, as referenced in the docker-compose.yaml file.
With the v5.11.0 release, Anchore Enterprise introduces an API so that the software entities
(agents, plugins, etc.) that integrate external systems with Enterprise can be tracked
and monitored.
As of v5.11.0, only the Kubernetes Inventory agent uses this API. Existing versions
of agents and plugins will continue to work as before, but they cannot be tracked and
monitored with the new functionality.
This new feature and its API have two broad parts: integration registration and
integration health reporting. Both are discussed further below.
Terminology
Integration instance: A software entity, like an agent or plugin, that integrates an
external system with Anchore Enterprise. A deployed Kubernetes Inventory agent, Kubernetes Admission
Controller, and ECS Inventory agent are all examples of integration instances.
Integration status: The (life-cycle) status of the integration instance as perceived
by Enterprise. After registration is completed, this status is determined by whether
health reports are received or not.
Reported status: The status of the integration instance as perceived by the integration
instance itself. This is determined from the contents of the health reports, specifically whether
they contain any errors.
Integration registration
When an integration instance that supports integration registration and health reporting
is started, it performs registration with Anchore Enterprise. This is a kind of
handshake where the integration instance introduces itself, declaring which type it is
and presenting various other information about itself. In response, Anchore Enterprise
provides the integration instance with the uuid that identifies the integration
instance from that point onwards.
The registration request includes two identifiers: registration_id and
registration_instance_id. Anchore Enterprise maintains a record of the association
between integration uuid and <registration_id, registration_instance_id>.
If an integration instance is restarted, it will perform registration again. Assuming the
<registration_id, registration_instance_id> pair in that re-registration remains the
same as in the original registration, Enterprise will consider the integration instance
to be the same (and thus provide the integration instance with the same uuid). Should
the <registration_id, registration_instance_id> pair be different, then Enterprise will
consider the integration instance to be different and assign it a new uuid.
Integrations deployed as multiple replicas
An integration can be deployed as multiple replicas. An example is the Kubernetes Inventory agent,
whose helm chart deploys it as a Kubernetes Deployment. That Deployment can be specified to have
replicas > 1 (although this is not advisable: the agent is, strictly speaking, not
implemented to run as multiple replicas; it will work, but only adds unnecessary load).
In such a case, each replica will have identical configuration. Each replica will register as an
integration instance and be given its own uuid. By inspecting the registration_id and
registration_instance_id it is often possible to determine whether the instances are part of
the same replica set: they will have registered with identical registration_id but
different registration_instance_id values. The exception is if each integration instance
self-generated a unique registration_id to use during registration. In that case
they cannot be identified as belonging to the same replica set this way.
Integration health reporting
Once registered, an integration instance can send periodic health reports to Anchore
Enterprise. The interval between two health reports can be configured to be 30 to 600
seconds. The default value is typically 60 seconds.
Each health report includes a uuid that identifies it and a timestamp indicating when it was sent.
These can be used when searching the integration instance’s log file during troubleshooting.
The health report also includes the uptime of the integration instance as well as an
’errors’ property that contains errors that the integration wants to make Anchore Enterprise
aware of. In addition to the above, health reports can also include data specific to the
type of integration.
Reported status derived from health reports
When Anchore Enterprise receives a health report that contains errors from an integration
instance, it will set that instance’s reportedStatus.state to unhealthy and the
reportedStatus.healthReportUuid is set to the uuid of the health report.
If subsequent health reports do not contain errors, the instance’s reportedStatus.state
is set to healthy and the reportedStatus.healthReportUuid is unset.
This is an example of what the reported status can look like from an integration instance
that sends health reports indicating errors:
{"reportedStatus":{"details":{"errors":["unable to report Inventory to Anchore account account0: failed to report data to Anchore: \u0026{Status:4","user account not found (account1) | ","unable to report Inventory to Anchore account account2: failed to report data to Anchore: \u0026{Status:4","user account not found (account3) | "],"healthReportUuid":"d676f221-0cc7-485e-b909-a5a1dd8d244e"},"reason":"Health report included errors","state":"unhealthy"}}
The details.errors list indicates that there are issues related to ‘account0’,
‘account1’, ‘account2’ and ‘account3’. To fully triage and troubleshoot these issues, one
will typically have to search the log file of the integration instance.
This is an example of reported status for case without errors:
{"reportedStatus":{"state":"healthy"}}
The below figure illustrates how the reportedStatus.state property will transition
between its states.
Integration status derived from health reports
When an integration instance registers with Anchore Enterprise, it will declare at what
interval it will send health reports. A typical value will be 60 seconds.
As long as health reports are received from an integration instance, Enterprise will consider
it to be active. This is reflected in the integration instance’s integrationStatus.state
which is set to active.
If three (3) consecutive health reports fail to be received by Anchore Enterprise, it will
set the integration instance’s integrationStatus.state to inactive.
This is an example of what the integration status can look like when health reports have
not been received from an integration instance:
{"integrationStatus":{"reason":"Integration last_seen timestamp is older than 2024-10-21 15:33:07.534974","state":"inactive","updatedAt":"2024-10-21T15:46:07Z"}}
A next step to triage this could be to check if the integration instance is actually
running or if there is some network connectivity issue preventing health reports from
being received.
This is an example of integration status when health reports are received as expected:
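As an illustrative sketch (the timestamp shown is hypothetical):
{"integrationStatus":{"state":"active","updatedAt":"2024-10-21T15:46:07Z"}}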
The below figure illustrates how the integrationStatus.state will transition between
its (lifecycle) states.
Integration instance properties
An integration instance has the below properties. Some properties may not have a value.
accountName: The account that integration instance used during registration (and thus
belongs to).
accounts: List of account names that the integration instance handles. The list is
updated from information contained in health reports from the integration instance.
For the Kubernetes Inventory agent, this list holds all accounts that the agent has
recently attempted to send inventory reports for (regardless if the attempt
succeeded or not).
clusterName: The cluster where the integration instance executes. This will typically
be a Kubernetes cluster.
description: Short arbitrary text description of the integration instance.
explicitlyAccountBound: List of account names that the integration instance is
explicitly configured to handle. This does not include account names that an
integration instance could learn dynamically. For instance, the Kubernetes Inventory agent
can learn about account names to handle via a special label set on the namespaces.
Such account names are not included in this property.
healthReportInterval: Interval in seconds between health reports from the integration
instance.
integrationStatus: The (life cycle) status of the integration instance.
lastSeen: Timestamp when the last health report was received from the integration
instance.
name: Name of the integration instance.
namespace: The namespace where the integration executes. This will typically be a
Kubernetes namespace.
namespaces: List of namespaces that the integration is explicitly configured to handle.
registrationId: Registration id that the integration instance used during registration.
registrationInstanceId: Registration instance id that the integration instance used
during registration.
reportedStatus: The health status of the integration instance derived from information
reported in the last health report.
startedAt: Timestamp when the integration instance was started.
type: The type of the integration instance. In Enterprise v5.11.0,
k8s_inventory_agent is the only value.
uptime: Uptime (in seconds) of the integration instance.
username: Username that the integration instance registered using.
uuid: The UUID of the integration instance. Used in REST API to specify instance.
version: Software version that the integration instance runs.
3.1 - Container Registries via the API
Using the API or CLI, Anchore Enterprise can be instructed to download an image from a public or private container registry.
Anchore Enterprise will attempt to download images from any registry without requiring further configuration. However if
your registry requires authentication then the registry and corresponding credentials will need to be defined.
Anchore Enterprise can analyze images from any Docker V2 compatible registry.
Jump to the registry configuring guide for your registry:
Amazon AWS typically uses keys instead of traditional usernames and passwords. These keys consist of an access key ID and a secret access key. While it is possible to use the aws ecr get-login command to create an access token, the token expires after 12 hours, so it is not appropriate for use with Anchore Enterprise; a user would otherwise need to update their registry credentials regularly. When adding an Amazon ECR registry to Anchore Enterprise, you should therefore pass the aws_access_key_id and aws_secret_access_key.
The registry-type parameter instructs Anchore Enterprise to handle these credentials as AWS credentials rather than traditional usernames and passwords. Currently Anchore Enterprise supports two types of registry authentication: standard username and password for most Docker V2 registries, and Amazon ECR. In this example we specified the registry type on the command line; however, if this parameter is omitted then AnchoreCTL will attempt to guess the registry type from the URL, which uses a standard format.
Anchore Enterprise will use the AWS access key and secret access key to generate authentication tokens to access the Amazon ECR registry. Anchore Enterprise will manage regeneration of these tokens, which typically expire after 12 hours.
In addition to supporting AWS access key credentials, Anchore also supports the use of IAM roles for authenticating with Amazon ECR if Anchore Enterprise is run on an EC2 instance.
In this case, you can configure Anchore Enterprise to inherit the IAM role from the EC2 instance hosting the system.
When launching the EC2 instance that will run Anchore Enterprise you need to specify a role that includes the AmazonEC2ContainerRegistryReadOnly policy.
While this is best performed using a CloudFormation template, you can manually configure from the launch instance wizard.
Step 1: Select Create new IAM role.
Step 2: Under type of trusted entity select EC2.
Ensure that the AmazonEC2ContainerRegistryReadOnly policy is selected.
Step 3: Attach Permissions to the Role.
Step 4: Name the role.
Give a name to the role and add this role to the Instance you are launching.
On the running EC2 instance you can manually verify that the instance has inherited the correct role by running the following command:
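One way to do this is to query the EC2 instance metadata service, which returns the name of the attached role (the endpoint below is the standard IMDSv1 path; adapt if your instance enforces IMDSv2):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/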
Step 5: Enable IAM Authentication in Anchore Enterprise.
By default the support for inheriting the IAM role is disabled.
To enable IAM-based authentication, add the following entry to the top of the Anchore Enterprise config.yaml file:
allow_awsecr_iam_auto: True
Step 6: Add the Registry using the AWSAUTO user.
When IAM support is enabled, instead of passing the access key and secret access key, use “awsauto” for both username and password. This will instruct Anchore Enterprise to inherit the role from the underlying EC2 instance.
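A minimal sketch of adding an ECR registry in this mode (the registry hostname follows the ECR endpoint format shown later in this document, and the exact spelling of the registry-type option should be confirmed with anchorectl registry add --help):
anchorectl registry add 123456789012.dkr.ecr.us-east-1.amazonaws.com awsauto awsauto --type awsecr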
To use an Azure Registry, you can configure Anchore to use either the admin credential(s) or a service principal. Refer to Azure documentation for differences and how to setup each. When you’ve chosen a credential type, use the following to determine which registry command options correspond to each value for your credential type
Admin Account
Registry: The login server (Ex. myregistry1.azurecr.io)
Username: The username in the ‘az acr credential show --name <registry name>’ output
Password: The password or password2 value from the ‘az acr credential show’ command result
Service Principal
Registry: The login server (Ex. myregistry1.azurecr.io)
Username: The service principal app id
Password: The service principal password Note: You can follow Microsoft Documentation for creating a Service Principal.
To add an Azure registry credential, invoke anchorectl as follows:
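For example, a sketch using the login server shown above (substitute your own username and password values):
anchorectl registry add myregistry1.azurecr.io <username> <password>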
Once a registry has been added, any image that is added (e.g. anchorectl image add <Registry>/some/repo:sometag) will use the provided credential to download/inspect and analyze the image.
3.1.3 - Google Container Registry
When working with Google Container Registry, it is recommended that you use JSON keys rather than short-lived access tokens.
JSON key files are long-lived and are tightly scoped to individual projects and resources. You can read more about JSON credentials in Google’s documentation at the following URL: Google Container Registry advanced authentication
Once a JSON key file has been created with permissions to read from the container registry then the registry should be added with the username _json_key and the password should be the contents of the key file.
In the following example a file named key.json in the current directory contains the JSON key with readonly access to the my-repo repository within the my-project Google Cloud project.
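A hedged sketch of the command (the registry hostname is illustrative; use the hostname of your own registry endpoint):
anchorectl registry add gcr.io _json_key "$(cat key.json)"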
Once a registry has been added, any image that is added (e.g. anchorectl image add /some/repo:sometag) will use the provided credential to download/inspect and analyze the image.
3.1.5 - Managing Registries
Anchore Enterprise will attempt to download images from any registry without requiring further configuration.
However if your registry requires authentication then the registry and corresponding credentials will need to be defined.
Listing Registries
Running the following command lists the defined registries.
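For example (a hedged sketch using the registry subcommands shown elsewhere in this section):
anchorectl registry list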
Here we can see that 3 registries have been defined. If no registry is defined, Anchore Enterprise will attempt to
pull images without authentication, but if a registry is defined then all pulls of images from that registry will use the specified username and password.
Adding a Registry
Registries can be added using the following syntax.
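The general form is sketched below; the positional argument order is assumed from the other registry examples in this section:
anchorectl registry add <REGISTRY> <USERNAME> <PASSWORD>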
The REGISTRY parameter should include the fully qualified hostname and port number of the registry. For example: registry.anchore.com:5000
Anchore Enterprise will only pull images from a TLS/SSL enabled registry. If the registry is protected with a self-signed certificate or a certificate
signed by an unknown certificate authority, then the --secure-connection=<true|false> parameter can be passed, which instructs Anchore Enterprise not to validate the certificate.
The registry get command allows the user to retrieve details about a specific registry.
For example:
# anchorectl registry get registry.example.com
✔ Fetched registry
┌──────────────────────┬───────────────┬───────────────┬─────────────────┬──────────────────────┬─────────────┬──────────────────────┐
│ REGISTRY NAME │ REGISTRY TYPE │ REGISTRY USER │ REGISTRY VERIFY │ CREATED AT │ LAST UPATED │ REGISTRY │
├──────────────────────┼───────────────┼───────────────┼─────────────────┼──────────────────────┼─────────────┼──────────────────────┤
│ registry.example.com │ docker_v2 │ johndoe │ false │ 2022-08-25T20:58:33Z │ │ registry.example.com │
└──────────────────────┴───────────────┴───────────────┴─────────────────┴──────────────────────┴─────────────┴──────────────────────┘
In this example we can see that the registry.example.com registry was added to Anchore Enterprise on the 25th August at 20:58 UTC.
The password for the registry cannot be retrieved through the API or AnchoreCTL.
Updating Registry Details
Once a registry has been defined, its parameters can be updated using the update command. This allows a registry’s username, password, and secure-connection (validate TLS) parameters to be updated using the same syntax as the ‘add’ operation.
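For example, a sketch assuming the same positional arguments as the add operation (credential values are placeholders):
anchorectl registry update registry.example.com <username> <new-password>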
A registry can be deleted from Anchore’s configuration using the delete command.
For example to delete the configuration for registry.example.com the following command should be issued:
# anchorectl registry delete registry.example.com
✔ Deleted registry
No results
Note: Deleting a registry record does not delete the records of images/tags associated with that registry.
Advanced
Anchore Enterprise attempts to perform a credential validation upon registry addition, but there are cases where a credential can be valid yet the validation routine fails (in particular, credential
validation methods for public registries change over time). If you are unable to add a registry but believe that the credential you are providing is valid, or you wish to add a
credential to Anchore before it is in place in the registry, you can bypass the registry credential validation process using the --validate=false option on the registry add or registry update command.
3.2 - Configuring Registries via the GUI
Introduction
In this section you will learn how to configure access to registries within the Anchore Enterprise UI.
Assumptions
You have a running instance of Anchore Enterprise and access to the UI.
You have the appropriate permissions to list and create registries. This means you are either a user in the admin account, or a user that is already a member of the read-write role for your account.
The UI will attempt to download images from any registry without requiring further configuration. However, if your registry requires authentication then the registry and corresponding credentials will need to be defined.
First off, after a successful login, navigate to the Configuration tab in the main menu.
Add a New Registry
In order to define a registry and its credentials, navigate to the Registries tab within Configuration. If you have not yet defined any registries, select the Let’s add one! button. Otherwise, select the Add New Registry button on the right-hand side.
Upon selection, a modal will appear:
A few items will be required:
Registry
Type (e.g. docker_v2 or awsecr)
Username
Password
As the required field values may vary depending on the type of registry and credential options, they will be covered in more depth below. A couple of additional options are also provided:
Allow Self Signed
By default, the UI will only pull images from a TLS/SSL enabled registry. If the registry is protected with a self-signed certificate or a certificate signed by an unknown certificate authority, you can enable this option by sliding the toggle to the right to instruct the UI not to validate the certificate.
Validate on Add
Credential validation is attempted by default upon registry addition although there may be cases where a credential can be valid but the validation routine can fail (in particular, credential validation methods are changing for public registries over time). Disabling this option by sliding the toggle to the left will instruct the UI to bypass the validation process.
Once a registry has been successfully configured, its credentials as well as the options mentioned above can be updated by clicking Edit under the Actions column. For more information on analyzing images with your newly defined registry, refer to: UI - Analyzing Images.
The instructions provided below for setting up the various registry types can also be seen inline by clicking ‘Need some help setting up your registry?’ near the bottom of the modal.
Docker V2 Registry
Regular docker v2 registries include Docker Hub, quay.io, Artifactory, the docker registry v2 container, the Red Hat public container registry, and many others. Generally, if you can execute a ‘docker login’ with a pair of credentials, Anchore can use those.
Registry
Hostname or IP of your registry endpoint, with an optional port
Ex: docker.io, mydocker.com:5000, 192.168.1.20:5000
Type
Set this to docker_v2
Username
Username that has access to the registry
Password
Password for the specified user
Amazon Elastic Container Registry (Amazon ECR)
Registry
The Amazon ECR endpoint hostname
Ex: 123456789012.dkr.ecr.us-east-1.amazonaws.com
Type
Set this to awsecr
For Username and Password, there are three different modes that require different settings when adding an Amazon ECR registry, depending on where your Anchore Enterprise is running and how your AWS IAM settings are configured to allow access to a given Amazon ECR registry.
API Keys
Provide access/secret keys from an account or IAM user. We highly recommend using a dedicated IAM user with specific access restrictions for this mode.
Username
AWS access key
Password
AWS secret key
Local Credentials
Uses the AWS credentials found in the local execution environment for Anchore Enterprise (Ex. env vars, ~/.aws/credentials, or instance profile).
Username
Set this to awsauto
Password
Set this to awsauto
Amazon ECR Assume Role
To have Anchore Enterprise assume a specific role different from the role it currently runs within, specify a different role ARN. Anchore Enterprise will use the execution role (as in iamauto mode from the instance/task profile) to assume a different role. The execution role must have permissions to assume the role requested.
When working with Google Container Registry, it is recommended that you use service account JSON keys rather than the short lived access tokens. Learn more about how to generate a JSON key here.
To use an Azure Registry, you can configure Anchore to use either the admin credential(s) or a service principal. Refer to Azure documentation for differences and how to setup each.
Registry
The login server
Ex. myregistry1.azurecr.io
Type
Set this to docker_v2
Admin Account
Username
The username in the ‘az acr credential show --name <registry name>’ output
Password
The password or password2 value from the ‘az acr credential show’ command result
Service Principal
Username
The service principal app id
Password
The service principal password
To use a Harbor Registry, you will need to provide the Harbor registry URL, along with your Harbor username and password. Ensure the Type is set to docker_v2.
Registry
The login server
Ex. core.harbor.domain
Type
Set this to docker_v2
Harbor Log in
Username
The username you use to sign in to Harbor (e.g., admin).
Password
The password you use to log in to Harbor (e.g., Harbor12345).
3.3 - CI / CD Integration
Anchore Enterprise can be integrated into CI/CD systems such as Jenkins, GitHub, or GitLab to secure pipelines by adding automatic scanning.
If an artifact does not pass the policy checks then users can configure either a gating workflow which fails the build or allow the pipeline to continue with a warning to the build job owner. Notifications can be handled via the CI/CD system itself or using Anchore’s native notification system and can provide information about the CVEs discovered and the complete policy analysis. Images that pass the policy check can be promoted to the production registry.
There are two ways to use CI/CD with Anchore: Distributed Analysis or Centralized Analysis. Both modes work with any CI/CD system as long as the AnchoreCTL binary can be installed and run, or you can access the Enterprise APIs directly.
Distributed mode
The build job invokes a tool called AnchoreCTL locally on the CI/CD runner to generate both data and metadata about the artifact being scanned, such as source code or a container image, in the form of a software bill of materials (SBOM). The SBOM is then passed to Anchore Enterprise for analysis. The policy analysis can look for known CVEs, exposed secrets, incorrect configurations, licenses, and more.
Centralized mode
The build job uploads the container image to a repository and then requests that Anchore Enterprise pull it down, generate the SBOM on the backend, and return the policy analysis result.
Requirements
Anchore Enterprise is deployed in your environment with the API accessible from your pipeline runner.
Centralized Mode: Credentials for your container registry are added to Anchore Enterprise, under the Anchore account that you intend to use with this pipeline. See Registries. For information on what registry/credentials must be added to allow Anchore Enterprise to access your container registry, refer to your container registry’s documentation.
Further Reading
To learn more about distributed and centralized modes, please review the Analyzing Images via CTL documentation.
3.3.1 - GitLab
Requirements
Anchore Enterprise is deployed in your environment, with the API accessible from your GitLab CI environment.
Credentials for your GitLab Container Registry are added to Anchore Enterprise, under the Anchore account that you intend to use with GitLab CI. See Registries. For information on what registry/credentials must be added to allow Anchore Enterprise to access your GitLab Container Registry, see https://docs.gitlab.com/ee/user/packages/container_registry/.
1. Configure Variables
Ensure that the following variables are set in your GitLab repository (settings -> CI/CD -> Variables -> Expand -> Add variable):
ANCHORECTL_USERNAME (protected)
ANCHORECTL_PASSWORD (protected and masked)
ANCHORECTL_URL (protected)
Note: GitLab has a minimum length of 8 characters for variable values. Please ensure both your username and password meet this requirement.
2. Create config file
Create a new file in your repository. Name the file .gitlab-ci.yml.
3. Configure scanning mode
a) Distributed Mode
This is the most easily scalable method for scanning images. Distributed scanning uses the anchorectl utility to build the SBOM directly on the build runner and then pushes the SBOM to Anchore Enterprise through the API. To use this scanning method, paste the following workflow script into your new .gitlab-ci.yml file. After building the image from your Dockerfile and scanning it with anchorectl, this workflow will display vulnerabilities and policy results in the build log. After pasting, click “Commit changes” to save the new file.
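A minimal .gitlab-ci.yml sketch for this mode is shown below. It is illustrative rather than canonical: it assumes a Docker-in-Docker build, an image tagged from GitLab's predefined CI variables, and the ANCHORECTL_URL/ANCHORECTL_USERNAME/ANCHORECTL_PASSWORD variables configured in step 1; the anchorectl commands mirror those used in the GitHub examples later in this document.
anchore_distributed_scan:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    ANCHORECTL_FAIL_BASED_ON_RESULTS: "false"
    IMAGE: "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
  script:
    # install the latest anchorectl binary
    - apk add --no-cache curl
    - curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b /usr/local/bin
    # build and push the image to the GitLab Container Registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$IMAGE" .
    - docker push "$IMAGE"
    # generate the SBOM on the runner and push it to Anchore Enterprise
    - anchorectl image add --no-auto-subscribe --wait --from registry --dockerfile Dockerfile "$IMAGE"
    # pull the vulnerability list and policy evaluation
    - anchorectl image vulnerabilities "$IMAGE"
    - anchorectl image check --detail "$IMAGE"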
b) Centralized Mode
This method uses the “analyzer” pods in the Anchore Enterprise deployment to build the SBOM. This can create queuing if there are not enough analyzer processes, and this method may require the operator to provide registry credentials in the Enterprise backend (if the images to be scanned are in private registries). This method may be preferred in cases where the Anchore Enterprise operator does not control the image build process (the analyzers can simply poll registries to look for new image builds as they are pushed), and this method also allows the operator to simply queue up the image for asynchronous scanning later if vulnerability and policy results are not required immediately. If the user wants malware scanning results from Anchore Enterprise’s clamav integration, the Centralized Scanning method is required. To use this scanning method, paste the following workflow script into your new .gitlab-ci.yml file. After building the image from your Dockerfile, this workflow will tell Anchore Enterprise to scan the image, then it will display the vulnerability and policy results in the build log. After pasting, click “Commit changes” to save the new file.
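An illustrative sketch for centralized mode follows; it makes the same assumptions as the distributed sketch above, but queues the image for analysis on the Enterprise backend instead of generating the SBOM on the runner:
anchore_centralized_scan:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    ANCHORECTL_FAIL_BASED_ON_RESULTS: "false"
    IMAGE: "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
  script:
    # install the latest anchorectl binary
    - apk add --no-cache curl
    - curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b /usr/local/bin
    # build and push the image to the GitLab Container Registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$IMAGE" .
    - docker push "$IMAGE"
    # queue the image for scanning by the Anchore Enterprise analyzers
    - anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile "$IMAGE"
    # pull the vulnerability list and policy evaluation
    - anchorectl image vulnerabilities "$IMAGE"
    - anchorectl image check --detail "$IMAGE"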
4. Run Pipeline
GitLab will automatically start a pipeline. Navigate to “Build” -> “Pipelines” and then click on your running pipeline.
5. View output
Once the build is complete, click on the “anchore” stage and view the output of the job. You will see the results of the vulnerability match and policy evaluation in the output.
3.3.2 - GitHub
Image Scanning can be easily integrated into your GitHub Actions pipeline using anchorectl.
1. Configure Variables
Ensure that the following variables/secrets are set in your GitHub repository (repository settings -> secrets and variables -> actions):
Variable ANCHORECTL_URL
Variable ANCHORECTL_USERNAME
Secret ANCHORECTL_PASSWORD
These are necessary for the integration to access your Anchore Enterprise deployment. The ANCHORECTL_PASSWORD value should be created as a repository secret to prevent exposure of the value in job logs, while ANCHORECTL_URL and ANCHORECTL_USERNAME can be created as repository variables.
2. Configure Permissions
Under your repository settings (“Settings” -> “Actions” -> “General” -> “Workflow permissions”), select “Read and write permissions” and click “Save”.
3. Create config file
In your repository, create a new file ( “Add file” -> “Create new file”) and name it .github/workflows/anchorectl.yaml.
4. Set scanning mode
a) Distributed Mode
This is the most easily scalable method for scanning images. Distributed scanning uses the anchorectl utility to build the SBOM directly on the build runner and then pushes the SBOM to Anchore Enterprise through the API. To use this scanning method, paste the following workflow script into your new anchorectl.yaml file. After building the image from your Dockerfile and scanning it with anchorectl, this workflow will display vulnerabilities and policy results in the build log.
name: Anchore Enterprise Distributed Scan
on:
workflow_dispatch:
inputs:
mode:
description: 'On-Demand Build'
env:
ANCHORECTL_URL: ${{ vars.ANCHORECTL_URL }}
ANCHORECTL_USERNAME: ${{ vars.ANCHORECTL_USERNAME }}
ANCHORECTL_PASSWORD: ${{ secrets.ANCHORECTL_PASSWORD }}
## set ANCHORECTL_FAIL_BASED_ON_RESULTS to true if you want to break the pipeline based on the evaluation
ANCHORECTL_FAIL_BASED_ON_RESULTS: false
REGISTRY: ghcr.io
jobs:
Build:
runs-on: ubuntu-latest
steps:
- name: "Set IMAGE environmental variables"
run: |
echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
- name: Checkout Code
uses: actions/checkout@v3
- name: Log in to the Container registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: build local container
uses: docker/build-push-action@v3
with:
tags: ${{ env.IMAGE }}
push: true
load: false
Anchore:
runs-on: ubuntu-latest
needs: Build
steps:
- name: "Set IMAGE environmental variables"
run: |
echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
- name: Checkout Code
### only need to do this if you want to pass the dockerfile to Anchore during scanning
uses: actions/checkout@v3
- name: Install Latest anchorectl Binary
run: |
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b ${HOME}/.local/bin v1.6.0
export PATH="${HOME}/.local/bin/:${PATH}"
- name: Generate SBOM and Push to Anchore
run: |
anchorectl image add --no-auto-subscribe --wait --from registry --dockerfile Dockerfile ${IMAGE}
- name: Pull Vulnerability List
run: |
anchorectl image vulnerabilities ${IMAGE}
- name: Pull Policy Evaluation
run: |
# set "ANCHORECTL_FAIL_BASED_ON_RESULTS=true" (see above in the "env:" section) to break the pipeline here if the
# policy evaluation returns FAIL or add -f, --fail-based-on-results to this command for the same result
#
anchorectl image check --detail ${IMAGE}
b) Centralized Mode
This method uses the “analyzer” pods in the Anchore Enterprise deployment to build the SBOM. This can create queuing if there are not enough analyzer processes, and this method may require the operator to provide registry credentials in the Enterprise backend (if the images to be scanned are in private registries). This method may be preferred in cases where the Anchore Enterprise operator does not control the image build process (the analyzers can simply poll registries to look for new image builds as they are pushed), and this method also allows the operator to simply queue up the image for asynchronous scanning later if vulnerability and policy results are not required immediately. If the user wants malware scanning results from Anchore Enterprise’s clamav integration, the Centralized Scanning method is required. To use this scanning method, paste the following workflow script into your new anchorectl.yaml file. After building the image from your Dockerfile, this workflow will tell Anchore Enterprise to scan the image, then it will display the vulnerability and policy results in the build log.
name: Anchore Enterprise Centralized Scan
on:
workflow_dispatch:
inputs:
mode:
description: 'On-Demand Build'
env:
ANCHORECTL_URL: ${{ vars.ANCHORECTL_URL }}
ANCHORECTL_USERNAME: ${{ vars.ANCHORECTL_USERNAME }}
ANCHORECTL_PASSWORD: ${{ secrets.ANCHORECTL_PASSWORD }}
## set ANCHORECTL_FAIL_BASED_ON_RESULTS to true if you want to break the pipeline based on the evaluation
ANCHORECTL_FAIL_BASED_ON_RESULTS: false
REGISTRY: ghcr.io
jobs:
Build:
runs-on: ubuntu-latest
steps:
- name: "Set IMAGE environmental variables"
run: |
echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
- name: Checkout Code
uses: actions/checkout@v3
- name: Log in to the Container registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: build local container
uses: docker/build-push-action@v3
with:
tags: ${{ env.IMAGE }}
push: true
load: false
Anchore:
runs-on: ubuntu-latest
needs: Build
steps:
- name: "Set IMAGE environmental variables"
run: |
echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV
- name: Checkout Code
uses: actions/checkout@v3
- name: Install Latest anchorectl Binary
run: |
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b ${HOME}/.local/bin
export PATH="${HOME}/.local/bin/:${PATH}"
- name: Queue Image for Scanning by Anchore Enterprise
run: |
anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile ${IMAGE}
- name: Pull Vulnerability List
run: |
anchorectl image vulnerabilities ${IMAGE}
- name: Pull Policy Evaluation
run: |
# set "ANCHORECTL_FAIL_BASED_ON_RESULTS=true" (see above in the "env:" section) to break the pipeline here if the
# policy evaluation returns FAIL or add -f, --fail-based-on-results to this command for the same result
#
anchorectl image check --detail ${IMAGE}
5. Run Workflow
Go to “Actions”, select the workflow you created (“Anchore Enterprise Distributed Scan” or “Anchore Enterprise Centralized Scan”), and hit “Run workflow”.
6. View Results
When the workflow completes, view the results by clicking on the workflow name, then on the job (“Anchore”), then expand the “Pull Vulnerability List” and/or “Pull Policy Evaluation” steps to see the details.
7. Notifications
You can also integrate your Anchore deployment with the GitHub API so that Anchore notifications are sent to GitHub Notifications as new issues in a repository.
3.3.3 - Jenkins
1. Configure Credentials
Ensure that the following credentials are set in your Jenkins instance (Dashboard -> Manage Jenkins -> Credentials) as credential type “secret text”:
Anchorectl_Url
Anchorectl_Username
Anchorectl_Password
These are necessary for the integration to access your Anchore Enterprise deployment. Storing them as Jenkins “secret text” credentials prevents the values from being exposed in job logs.
2. Configure scanning mode
a) Distributed
This is the most easily scalable method for scanning images. Distributed scanning uses the anchorectl utility to build the SBOM directly on the build runner and then pushes the SBOM to Anchore Enterprise through the API. To use this scanning method, paste the following stage anywhere after your target container image has been built:
stage('Analyze Image w/ anchorectl') {
environment {
ANCHORECTL_URL = credentials("Anchorectl_Url")
ANCHORECTL_USERNAME = credentials("Anchorectl_Username")
ANCHORECTL_PASSWORD = credentials("Anchorectl_Password")
// change ANCHORECTL_FAIL_BASED_ON_RESULTS to "true" if you want to break on policy violations
ANCHORECTL_FAIL_BASED_ON_RESULTS = "false"
}
steps {
script {
sh """
### install latest anchorectl
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b $HOME/.local/bin
export PATH="$HOME/.local/bin/:$PATH"
#
### actually add the image to the queue to be scanned
#
### --wait tells anchorectl to block until the scan
### is complete (this isn't always necessary but if
### you want to pull the vulnerability list and/or
### policy report, you need to wait
#
anchorectl image add --wait --from registry ${REGISTRY}/${REPOSITORY}:${TAG}
#
### pull vulnerability list (optional)
anchorectl image vulnerabilities ${REGISTRY}/${REPOSITORY}:${TAG}
###
### check policy evaluation (omit --detail if you just
### want a pass/fail determination)
anchorectl image check --detail ${REGISTRY}/${REPOSITORY}:${TAG}
###
### if you want to break the pipeline on a policy violation, add "--fail-based-on-results"
### or change the ANCHORECTL_FAIL_BASED_ON_RESULTS variable above to "true"
"""
} // end script
} // end steps
} // end stage "analyze with anchorectl"
b ) Centralized
Centralized Scanning: this method uses the “analyzer” pods in the Anchore Enterprise deployment to build the SBOM. This can create queuing if there are not enough analyzer processes, and this method may require the operator to provide registry credentials in the Enterprise backend (if the images to be scanned are in private registries). This method may be preferred in cases where the Anchore Enterprise operator does not control the image build process (the analyzers can simply poll registries to look for new image builds as they are pushed), and this method also allows the operator to simply queue up the image for asynchronous scanning later if vulnerability and policy results are not required immediately. If the user wants malware scanning results from Anchore Enterprise’s clamav integration, the Centralized Scanning method is required. To use this scanning method, paste the following stage anywhere after your target container image has been built. After building the image from your Dockerfile, this stage will tell Anchore Enterprise to scan the image, then it will display the vulnerability and policy results in the build log.
stage('Analyze Image w/ anchorectl') {
environment {
ANCHORECTL_URL = credentials("Anchorectl_Url")
ANCHORECTL_USERNAME = credentials("Anchorectl_Username")
ANCHORECTL_PASSWORD = credentials("Anchorectl_Password")
// change ANCHORECTL_FAIL_BASED_ON_RESULTS to "true" if you want to break on policy violations
ANCHORECTL_FAIL_BASED_ON_RESULTS = "false"
}
steps {
script {
sh """
### install latest anchorectl
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b $HOME/.local/bin
export PATH="$HOME/.local/bin/:$PATH"
#
### actually add the image to the queue to be scanned
#
### --wait tells anchorectl to block until the scan
### is complete (this isn't always necessary but if
### you want to pull the vulnerability list and/or
### policy report, you need to wait
#
anchorectl image add --wait ${REGISTRY}/${REPOSITORY}:${TAG}
#
### pull vulnerability list (optional)
anchorectl image vulnerabilities ${REGISTRY}/${REPOSITORY}:${TAG}
###
### check policy evaluation (omit --detail if you just
### want a pass/fail determination)
anchorectl image check --detail ${REGISTRY}/${REPOSITORY}:${TAG}
###
### if you want to break the pipeline on a policy violation, add "--fail-based-on-results"
### or change the ANCHORECTL_FAIL_BASED_ON_RESULTS variable above to "true"
"""
} // end script
} // end steps
} // end stage "analyze with anchorectl"
Kubernetes can be configured to use an Admission Controller to validate that the container image is compliant with the user’s policy before allowing or preventing deployment.
Anchore Enterprise can be integrated with Kubernetes to ensure that only certified images are started within a cluster. The admission controller can be configured to make a webhook call into Anchore Enterprise. Anchore Enterprise exports a Kubernetes-specific API endpoint and will return the pass or fail response in the form of an ImageReview response. This approach allows the Kubernetes system to make the final decision on running a container image and does not require installation of any per-node plugins into Kubernetes.
Using native Kubernetes features allows the admission controller approach to be used in both on-prem and cloud-hosted Kubernetes environments.
Getting Started
Full information on installation and configuration of the Anchore Kubernetes Admission Controller can be found here.
Note
The Anchore Kubernetes Admission Controller is a licensed add-on, please make sure you have a valid runtime license entitlement.
Modes of Operation
The Anchore admission controller supports 3 different modes of operation allowing you to tune the tradeoff between control and intrusiveness for your environments.
Strict Policy-Based Admission Gating Mode
This is the strictest mode, and will admit only images that are already analyzed by Anchore and receive a “pass” on policy evaluation. This enables you to ensure, for example, that no image is deployed into the cluster that has a known high-severity CVE with an available fix, or any of several other conditions. Anchore’s policy language supports sophisticated conditions on the properties of images, vulnerabilities, and metadata.
Analysis-Based Admission Gating Mode
Admit only images that are analyzed and known to Anchore, but do not execute or require a policy evaluation. This is useful in cases where you’d like to enforce requirement that all images be deployed via a CI/CD pipeline, for example, that itself manages the image scanning with Anchore, but allowing the CI/CD process to determine what should run based on other factors outside the context of the image or k8s itself.
Passive Analysis Trigger Mode
Trigger an Anchore analysis of images, but do not block execution on analysis completion or policy evaluation of the image. This is a way to ensure that all images that make it to deployment (test, staging, or prod) are guaranteed to have some form of analysis audit trail available and a presence in reports and notifications that are managed by Anchore. Image records in Anchore are given an annotation of “requestor=anchore-admission-controller” to help track their provenance.
3.4.2 - Kubernetes Runtime Inventory
Anchore uses a go binary called anchore-k8s-inventory that leverages the Kubernetes Go SDK
to reach out and list containers in a configurable set of namespaces to determine which images are running.
anchore-k8s-inventory can be deployed via its helm chart, embedded within your Kubernetes cluster as an agent. It will require access to the Anchore API.
Note
The Anchore Kubernetes Inventory Agent is a licensed add-on, please make sure you have a valid runtime license entitlement.
Getting Started
The most common way to track inventory is to install anchore-k8s-inventory as an agent in your cluster. To do this you will need to configure credentials
and information about your deployment in the values file. It is recommended to first configure a specific robot user
for the account where you’ll want to track your Kubernetes inventory.
As an agent anchore-k8s-inventory is installed using helm and the helm chart is hosted as part of the https://charts.anchore.io repo.
It is based on the anchore/k8s-inventory docker image.
To install the helm chart, follow these steps:
Configure your username, password, Anchore account, Anchore URL and cluster name in the values file.
k8sInventory:
  # Path should not be changed, cluster value is used to tell Anchore which cluster this inventory is coming from
  kubeconfig:
    cluster: <unique-name-for-your-cluster>
  anchoreRegistration:
    #RegistrationId: ""
    IntegrationName: "<unique-name-for-your-cluster>"
    IntegrationDescription: ""
  anchore:
    url: <URL for your Anchore Enterprise deployment>
    # Note: recommend using the inventory-agent role
    user: <user>
    password: <password>
    account: <account>
Run helm install in the cluster(s) you wish to track
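A sketch of the installation, assuming the chart is published as k8s-inventory in the Anchore charts repository (verify the chart and release names against https://charts.anchore.io):
helm repo add anchore https://charts.anchore.io
helm repo update
helm install anchore-k8s-inventory -f values.yaml anchore/k8s-inventory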
anchore-k8s-inventory must be able to resolve the Anchore URL and requires API credentials. Review the anchore-k8s-inventory logs if you are not able to see the inventory results in the UI.
Note: the Anchore API Password can be provided via a Kubernetes secret, or injected into the environment of the anchore-k8s-inventory container
For injecting the environment variable, see: injectSecretsViaEnv
For providing your own secret for the Anchore API Password, see: useExistingSecret. K8s Inventory creates its own secret based on your values.yaml file for the key k8sInventory.anchore.password, but the k8sInventory.useExistingSecret key allows you to create your own secret and provide it in the values file. See the K8s Inventory repo for more information about the K8s Inventory-specific configuration.
Usage
To verify that you are tracking Kubernetes Inventory you can access inventory results with the command anchorectl inventory list and look for results where the TYPE is kubernetes.
The UI also displays the Kubernetes Inventory and allows operators to visually navigate the images, vulnerability results, and see the results of the policy evaluation.
For more details about watching clusters, and reviewing policy results see the Using Kubernetes Inventory section.
Anchore uses a go binary called anchore-ecs-inventory that leverages the AWS Go SDK
to gather an inventory of containers and their images running on Amazon ECS and report back to Anchore.
The Amazon ECS Inventory Agent can be installed via Helm Chart or as an ECS task definition.
Note
The Anchore Amazon ECS Inventory Agent is a licensed add-on, please make sure you have a valid runtime license entitlement.
Getting Started via Helm
You can install the chart via the Anchore repository:
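For example, a sketch assuming the chart is published as ecs-inventory in the Anchore charts repository (verify against https://charts.anchore.io):
helm repo add anchore https://charts.anchore.io
helm install anchore-ecs-inventory -f values.yaml anchore/ecs-inventory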
A basic values file can be found here. The key configurations are in the ecsInventory section.
Anchore ECS Inventory creates its own secret based on your values.yaml file for the following keys, which are required for successfully deploying and connecting the ecs-inventory service to the Anchore Platform and the Amazon ECS service:
ecsInventory.awsAccessKeyId
ecsInventory.awsSecretAccessKey
Using your own secrets
The (ecsInventory.useExistingSecret and ecsInventory.existingSecretName) or ecsInventory.injectSecretsViaEnv keys allow you to create your own secret and provide it in the values file, or to place the required secrets into the pod by other means, such as injecting them with HashiCorp Vault.
It is also possible to deploy the ecs-inventory container on Amazon ECS. Here is a sample task definition that could be used to deploy ecs-inventory with a default configuration:
To verify that you are tracking Amazon ECS inventory you can access inventory results with the command anchorectl inventory list and look for results where the TYPE is ecs.
Auto analyze new inventory
It is possible to create a subscription to watch for new Amazon ECS inventory that is reported to Anchore and automatically schedule those images for
analysis.
1. Create the subscription
A subscription can be created by sending a POST to /v1/subscriptions with the following payload:
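A sketch of the request body is shown below. The subscription_type value is an assumption based on the runtime inventory feature, and the subscription_key shown is a hypothetical ECS cluster ARN; confirm both against your Enterprise API reference before use:
{
  "subscription_key": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
  "subscription_type": "runtime_inventory_image_new"
}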
Use anchorectl to generate a software bill of materials (SBOM) and import a source repository artifact from a file location on disk. You can also get information about the source repository, investigate vulnerability packages by requesting vulnerabilities for a single analyzed source repository, or get any policy evaluations.
The workflow would generally be as follows.
Generate an SBOM. The format is similar to the following:
syft <path> -o json > <resulting filename>.json
For example:
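The path and output filename below are illustrative:
syft ./my-project -o json > my-project-sbom.json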
Import the SBOM from a source with metadata. This would normally occur as part of a CI/CD pipeline, and the various metadata would be programmatically added via environment variables. The response from anchorectl includes the new ID of the Source in Anchore Enterprise. For example:
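An illustrative sketch, using the host, repository, and revision shown in the listing below; the --from flag and any metadata flags should be confirmed with anchorectl source add --help:
anchorectl source add github.com/my-project@12345 --from ./my-project-sbom.json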
List the source repositories that you have sent to Anchore Enterprise. This command will allow the operator to list all available source repositories within the system and their current status.
# anchorectl source list
✔ Fetched sources
┌──────────────────────────────────────┬────────────┬─────────────────────┬──────────────────────────────────────────┬─────────────────┬───────────────┐
│ UUID │ HOST │ REPOSITORY │ REVISION │ ANALYSIS STATUS │ SOURCE STATUS │
├──────────────────────────────────────┼────────────┼─────────────────────┼──────────────────────────────────────────┼─────────────────┼───────────────┤
│ fa416998-59fa-44f7-8672-dc267385e799 │ github.com │ my-project │ 12345 │ analyzed │ active │
└──────────────────────────────────────┴────────────┴─────────────────────┴──────────────────────────────────────────┴─────────────────┴───────────────┘
Fetch the uploaded SBOM for a source repository from Anchore Enterprise.
The UUID used in this command is taken from the UUID(s) of the listed source repositories.
Use anchorectl to investigate vulnerability packages by requesting vulnerabilities for a single analyzed source repository. You can choose os, non-os, or all. For example:
Anchore Enterprise supports integration with the ServiceNow Container Vulnerability Response (CVR) module. This integration allows CVR to collect data from Anchore’s APIs about vulnerabilities and create the associated CVITs. For more information about the ServiceNow CVR module, please consult the formal documentation on ServiceNow’s website.
For information about how to integrate ServiceNow with Anchore and to get access to the plugin for use with your ServiceNow platform, please contact Anchore Support.
3.8 - Harbor Scanner Adapter
Harbor Scanner Adapter
Harbor is an open-source, cloud-native registry that helps manage and secure container images. It integrates seamlessly with Anchore for vulnerability scanning and management.
You can add Harbor as a docker v2 registry; see Harbor registry. However, for a deeper integration you can use the Harbor Scanner Adapter, which coordinates registry access and lets Harbor issue scans.
The Harbor Scanner Adapter is a component that integrates Anchore with Harbor. It acts as a bridge between Harbor and Anchore, enabling Harbor to perform container image vulnerability scans using Anchore.
For information on deploying Harbor, see the Harbor Project.
3.8.1 - Adapter Installation and Configuration
Integrating Harbor
The Harbor Scanner Adapter for Anchore can be used to integrate Harbor with Anchore Enterprise. This scanner provides a gateway for Harbor to communicate with your Anchore Enterprise deployment thereby making it possible for jobs to be scheduled for scans through Harbor.
The adapter’s configuration can be customized using environment variables defined in the harbor-adapter-anchore.yaml.
You can edit this file to adjust the environment variables as needed to fit your deployment. You must configure how the adapter connects to Anchore; the following variables must be configured:
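As an illustrative sketch only, the variable names below are assumptions based on the adapter's ANCHORE_* naming convention; compare them against the harbor-adapter-anchore.yaml shipped with your release:
env:
  - name: ANCHORE_ENDPOINT
    value: "http://<your-anchore-enterprise-api>:8228"
  - name: ANCHORE_USERNAME
    value: "<dedicated-harbor-adapter-user>"
  - name: ANCHORE_PASSWORD
    value: "<password>"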
Note: It is highly recommended that you create a new account in the Anchore deployment and a new user with credentials dedicated to the Harbor adapter. When using Enterprise 5+, you can also utilize API keys. Learn how to generate them here.
For full Harbor Adapter configuration options, see here
Once you have edited the value file, use the updated file to deploy the Harbor Scanner Adapter by executing:
kubectl apply -f harbor-adapter-anchore.yaml
Once the adapter has been configured as shown above, you will need to add Anchore as the default scanner in Harbor.
Adding Anchore as default scanner
Setting Anchore as the default scanner in Harbor ensures that all image scans, unless specified otherwise, are automatically sent to your Anchore deployment for scanning. Follow the steps below to add Anchore as a scanner and set it as the default:
In the Harbor UI, log in as an admin and navigate to Administration->Interrogation Services->Scanners and click “+ New Scanner”. In older versions of Harbor, this can be found under Configuration->Scanners.
In ‘Endpoint’, use the adapter hostname/url. The default is the following:
http://harbor-scanner-anchore:8080
Leave the authorization field empty, as no API key was set in the adapter deployment environment for this example.
Please untick “use internal registry address”; otherwise, Anchore could have issues accessing the Harbor registry.
Click “Test Connection” to verify the connection. Then, click “Add” to add the scanner.
Now, to ensure all projects in Harbor make use of the newly configured Anchore scanner, you must make the Anchore scanner your default scanner. In the Harbor UI, navigate to the project->scanner, click “Select Scanner”, and click the radio button next to the Anchore Scanner to make it the default scanner.
Configuring Timeouts
Since Harbor and Anchore are separate systems, an API call is needed for communication between them. As a result, configuring timeouts may be necessary depending on factors such as your network, the proximity of the two systems, and overall latency.
The ANCHORE_CLIENT_TIMEOUT_SECONDS setting determines the timeout duration (in seconds) for API calls from the Harbor Adapter to the Anchore service. By default, it is set to 60 seconds. If the API call to Anchore exceeds this time, the scan may fail or be delayed. A shorter timeout can result in more frequent timeouts during scans, especially if the system is under heavy load or if Anchore’s response time is slower than expected.
The proximity of Anchore to the registry also plays a crucial role in scan performance. If Anchore is geographically distant or on a separate network from the registry, network latency could increase, leading to slower scan times or potential timeouts. Keeping Anchore close to the registry in terms of network topology can reduce latency, improving scan efficiency and reducing the likelihood of timeouts.
To increase the ANCHORE_CLIENT_TIMEOUT_SECONDS, set the environment variable in your harbor-adapter-anchore.yaml file and reapply it.
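For example, a minimal sketch of the relevant entry (the container/env layout of your harbor-adapter-anchore.yaml may differ):
env:
  - name: ANCHORE_CLIENT_TIMEOUT_SECONDS
    value: "180"    # raise from the default of 60 seconds if scans time out
kubectl apply -f harbor-adapter-anchore.yaml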
You can now see the pushed image in the Harbor UI by navigating to the project under the Projects menu.
Initiate a Vulnerability Scan
To scan your image for vulnerabilities select the image from the repository list. Click SCAN VULNERABILITY under the Actions menu:
During integration you will have configured Anchore Enterprise as your default scanner. This means vulnerability scan requests will be sent to your Anchore Enterprise deployment. Once the scan is complete, the results will appear in both Harbor and the Anchore Enterprise UI. You can view details about the vulnerabilities, including severity and remediation options.
Scheduling a Vulnerability Scan
Harbor allows you to schedule automated vulnerability scans on your container images. These scans can be performed using the configured scanner (Anchore Enterprise) and will help identify vulnerabilities within the images.
Navigate to Interrogation Services. Under the Vulnerability tab, you will see options for scheduling scans (hourly, daily, weekly, or custom). You can also initiate a scan of all your images immediately by clicking the SCAN NOW button.
Information about scans in progress is provided on this page.
It is important to note that weekly scans can take time, especially if you have many images. Because Anchore Enterprise caches images it has previously seen, it only fetches fresh vulnerability results for images it has not scanned before; this helps reduce the overall time required for weekly scans. Additionally, the number of analyzers, network latency, and timeouts can impact the time taken for a weekly scan to complete.
Enable Image Scanning on Push
By enabling the Scan on Push option under the project’s configuration, Harbor will automatically scan any new images pushed to the project, helping you identify and manage potential security risks efficiently. To enable this, navigate to the desired project -> Configuration and look for the Vulnerability scanning option, as shown in the picture.
Prevent vulnerable images from running
To prevent vulnerable images from being pulled and run, you can set up a policy which uses the last known vulnerability results.
Please note: Anchore is still able to pull images to conduct scans.
To do this, navigate to the desired Project -> Configuration and enable the Vulnerability Scanning option.
Locate the Deployment Security option, enable it, and choose the severity level to enforce.
Adding Proxy Registries
Harbor has the ability to act as a proxy registry linking to preconfigured upstream registries like DockerHub. This allows users to pull images directly from Harbor, which in turn pulls and caches the images from the upstream source using preconfigured credentials.
Use Case:
A common use case is that customers want to restrict registry access in a production and/or secure environment to only their Harbor registry, while Anchore’s own Enterprise images are published via DockerHub and Iron Bank, which might not be accessible from that environment. To resolve this, you can set up a proxy cache registry in Harbor and then pull the images from your Harbor deployment.
Don’t forget you can also configure your Anchore Enterprise values.yaml file so that your deployment will pull the images from your private Harbor registry.
Finally, an added benefit is that you have a local copy of the Anchore Enterprise images rather than relying on public services such as DockerHub or Iron Bank.
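As a sketch only (key names vary by chart version, so check your chart’s values reference; the hostname and proxy project name are examples), the image references in your Anchore Enterprise values.yaml could be pointed at the Harbor proxy project instead of DockerHub:
image: harbor.example.com/dockerhub-proxy/anchore/enterprise:v5.18.0
ui:
  image: harbor.example.com/dockerhub-proxy/anchore/enterprise-ui:v5.18.0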
Debugging scan issues
When image scanning fails in Harbor using Anchore, it’s important to review logs from three key components: Harbor, the Anchore Adapter, and Anchore Enterprise. Collecting these logs and generating a support bundle can help diagnose the issue. You can then share this information with the Anchore Customer Success team for further assistance.
For Anchore Enterprise, follow the instructions here to generate a support bundle.
3.9 - DefectDojo
DefectDojo
DefectDojo is an open source application vulnerability management platform that streamlines the handling of security findings from various tools, including seamless integration with Anchore Enterprise.
Anchore Enterprise vulnerability and policy reports, whether obtained through the UI or using anchorectl, can be seamlessly parsed and imported into DefectDojo for centralized vulnerability management.
Importing Anchore Enterprise analysis Data into DefectDojo
You can obtain vulnerability and policy evaluation reports from Anchore Enterprise through:
The Anchore Enterprise UI
The anchorectl
The Anchore API (for automation workflows)
The downloaded reports can be uploaded to DefectDojo by selecting the appropriate parser during the import process. For more details on available DefectDojo and Anchore parsers, see: DefectDojo Integration.
Downloading Vulnerability report from Anchore UI
To download vulnerability report data from the Anchore UI:
Click on the “Images” icon
Select the image tag for which you want to download the vulnerability data.
Navigate to the “Vulnerabilities” section and click “Vulnerability Report” to download the report.
Download the report in JSON format, then proceed to import it into DefectDojo.
Downloading Vulnerability and Policy report via anchorectl
To download a vulnerability report using anchorectl, run the following:
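A sketch of the kind of invocation involved (confirm the exact subcommands and flags with anchorectl image --help for your version; the image reference is an example):
# Vulnerability report in JSON for an analyzed image
anchorectl image vulnerabilities docker.io/nginx:latest -o json > nginx-vulnerabilities.json
# Policy evaluation report in JSON for the same image
anchorectl image check docker.io/nginx:latest --detail -o json > nginx-policy.json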
For more details on how to automate this process using DefectDojo API, see: DefectDojo API usage.
4 - Configuring Anchore
Configuring Anchore Enterprise starts with configuring each of the core services. Anchore Enterprise deployments using docker compose for trials or Helm for production are designed to run by default with no modifications necessary to get started. However, many options are available to tune your production deployment to fit your needs.
About Configuring Anchore Enterprise
All system services (except the UI, which has its own configuration) require a single configuration which is read from /config/config.yaml when each service starts up. Settings in this file are mostly related to static settings that are fundamental to the deployment of system services. They are most often updated when the system is being initially tuned for a deployment. They may, infrequently, need to be updated after they have been set as appropriate for any given deployment.
Sections of the UI configuration settings can now be found from within the UI itself! Navigate to the ‘System’ heading in the sidebar and then select ‘Configuration’. Here you will see some of the exposed configuration options:
By default, Anchore Enterprise includes a config.yaml that is functional out-of-the-box, with some parameters set to an environment variable for common site-specific settings. These settings are then set either in docker-compose.yaml, by the Helm chart, or as appropriate for other orchestration/deployment tools.
When deploying Anchore Enterprise using the Helm chart, you can configure it by modifying the anchoreConfig section in your values file. This section corresponds to the default config.yaml file included in the Anchore Enterprise container image. The values file serves to override the default configurations and should be modified to suit your deployment.
A single configuration file config.yaml is required to run Anchore - by default, this file is embedded in the Enterprise container image, located in /config/config.yaml. The default configuration file is provided as a way to get started, which is functional out of the box, without modification, when combined with either the Helm method or docker compose method of installing Enterprise. The default configuration is set up to use environment variable substitutions so that configuration values can be controlled by setting the corresponding environment variables at deployment time (see Using Environment Variables in Anchore).
Each environment variable (starting with ANCHORE_) in the default config.yaml is set (either the baseline as set in the Dockerfile, or an override in docker compose or Helm or through a system default) to ensure that the system comes up with a fully populated configuration.
Some examples of useful initial settings follow.
Default admin credentials: A default admin email and password must be defined in the catalog service for the initial bootstrap of Enterprise to succeed. Both are set through the default config file using the ANCHORE_ADMIN_PASSWORD and ANCHORE_ADMIN_EMAIL environment variables respectively. The system defines a default email, admin@myanchore, but does not define a default password. If using the default config file, the user must set a value for ANCHORE_ADMIN_PASSWORD for the initial bootstrap of the system to succeed. To set the default password or to override the default email, simply add overrides for the ANCHORE_ADMIN_PASSWORD and ANCHORE_ADMIN_EMAIL environment variables, set to your preferred values, prior to deploying Anchore Enterprise. After the initial bootstrap, this can be removed if desired.
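For example, with Docker Compose this is typically just an environment override on the catalog service (a sketch; adjust it to wherever your compose file sets ANCHORE_* variables):
services:
  catalog:
    environment:
      - ANCHORE_ADMIN_PASSWORD=<choose-a-strong-password>
      - ANCHORE_ADMIN_EMAIL=admin@example.com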
Log level: Anchore Enterprise is configured to run at the INFO log level by default. The full set of options is CRITICAL, ERROR, WARNING, INFO, and DEBUG (in ascending order of log output verbosity). Admin accounts can set the system log level through the dashboard, which allows differing per-service log levels; changes made through the dashboard do not require a system restart. The log level can also be set globally (across all services) with an override for ANCHORE_LOG_LEVEL prior to deploying Anchore Enterprise; this approach requires a system restart to take effect.
log_level: '${ANCHORE_LOG_LEVEL}'
Postgres Database: Anchore Enterprise requires access to a PostgreSQL database to operate. The database can be run as a container with a persistent volume (which is set up automatically if the example docker-compose.yaml is used), or outside of your container environment. If you wish to use an external Postgres database, the elements of the connection string in the config.yaml can be specified as environment variable overrides. The default configuration is set up to connect to a Postgres DB that is deployed alongside the Anchore Enterprise services when using docker-compose or Helm, on the internal host anchore-db on port 5432, using username postgres with password mysecretpassword and db postgres. If an external database service is being used, then you will need to provide the user, password, host, port, and DB name environment variables, as shown below.
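For example (variable names follow the default config.yaml conventions; verify them against the config file extracted from your image as described below, and substitute your own values):
ANCHORE_DB_HOST=db.example.internal
ANCHORE_DB_PORT=5432
ANCHORE_DB_USER=anchore
ANCHORE_DB_PASSWORD=<db-password>
ANCHORE_DB_NAME=anchore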
While Anchore Enterprise is set up to run out of the box without modifications, and many useful values can be overridden using environment variables as described above, one can always opt to have full control over the configuration by providing a config.yaml file explicitly, typically by generating the file and making it available from an external mount/configmap/etc. at deployment time. A good method to start, if you wish to provide your own config.yaml, is to extract the default config.yaml from the Anchore Enterprise container image, modify it, and then override the embedded /config/config.yaml at deployment time. For example:
Extract the default config file from the anchore/enterprise container image:
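A sketch of one way to do this with standard docker commands (substitute the image tag that matches your deployed version):
docker create --name ae-default-config docker.io/anchore/enterprise:v5.18.0
docker cp ae-default-config:/config/config.yaml ./my_config.yaml
docker rm ae-default-config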
Set up your deployment to override the embedded /config/config.yaml at run time (below example shows how to achieve this with docker compose). Edit the docker-compose.yaml to include a volume mount that mounts your my_config.yaml over the embedded /config/config.yaml, resulting in a volume section for each Anchore Enterprise service definition.
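A sketch of what that volume entry could look like for one service (repeat the volumes entry for each Anchore Enterprise service definition in your docker-compose.yaml):
services:
  api:
    volumes:
      - ./my_config.yaml:/config/config.yaml:ro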
Now, each service will come up with your external my_config.yaml mounted over the embedded /config/config.yaml.
4.1.1 - API Accessible Configuration
Anchore Enterprise provides the ability to view the majority of system configurations via the API.
A select number of configurations can also be modified through the API, or through the System Configuration tab from the UI.
This enables control of system behaviours and characteristics in a programmatic way without needing to restart your deployment.
Note
⚠️ At this time, only a few configurations are modifiable through this API-backed mechanism. Full configuration continues to be achievable through the existing config file mechanism.
Configuration Provenance
Configuration values are established by a priority system which derives the final running value based on the location from which the value was read. This provenance priority mechanism enables backwards compatibility and increased security for customers who want tighter control of their system's runtime.
1. Config File Source
Any configuration variable which is declared within an on-disk configuration file will be honored first (i.e., via a Helm values.yaml).
These configuration variables will be read-only through the API and UI. Any configuration that is currently set for your deployment
will be preserved.
Note
⚠️ The use of environment variables for configuration values is treated with the same provenance as the config file.
2. API Config Source
When a configuration variable is not declared in a config file and is one of the select few that are allowed to be modified via the API, its value will be stored in the system database. These configuration variables can be modified via the API, and the changes will be reflected in the deployment within a short time, without having to restart your deployment.
The following configuration variables are modifiable through the API:
3. System Default Source
If a configuration variable has not been declared by any other source, it inherits the default system value. These are values chosen by Anchore as safe and functional defaults.
For the majority of deployments we recommend you simply use the default settings for most values as they have been calibrated to
offer the best experience out of the box.
⚠️ Note: These values may change from release to release. Notable changes will be communicated through Release Notes. Significant
system behaviour changes will not be performed between minor releases. If you wish to ensure a default is not changed, “pin” the
value either by providing it explicitly in the config file or setting it directly via the API (when available).
Blocked by Config File
Attempts to set a configuration through the API may return a blocked_by_config_file error.
This error is raised when a configuration variable is not API-editable because it is set in a config file.
Locations to inspect for a pinned config value are:
the Helm attached values.yaml file (continue reading for further instructions)
the compose attached config.yaml file
the environment variables being passed into the container
Disabling Helm Chart Configs
For Helm users, to make a value configurable through the API, set the value in the attached values.yaml to "<ALLOW_API_CONFIGURATION>"
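For illustration only (the key shown is a placeholder, not an actual setting name; substitute the API-editable setting you want to release to API control):
anchoreConfig:
  some_api_editable_setting: "<ALLOW_API_CONFIGURATION>"   # placeholder key for illustration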
Each running container will refresh its internal configuration state approximately every 30 seconds. After making a change please allow
a minute to elapse for the changes to flush out to all running nodes.
Config File Configurations Refresh
Changes made to a mounted config file will be detected and will cause a system configuration refresh.
If a configuration variable that is changed requires a system restart a log message will be printed to that effect.
If a system is pending restart, none of the changes will take effect until
this is performed. This config file watcher includes a mounted .env file if in use.
Config Validation
In a future major release of Anchore Enterprise, we will enable strict type and data validation for the configuration.
This will result in any non-compliant Enterprise deployments failing to boot. For now, this is not enforced.
For awareness, validation of your configuration variables and values will occur during system boot.
If any configuration fails this validation, a log message mentioning lenient mode will be published.
This will be accompanied by details on which entries have failed.
To avoid system downtime in a future major release, consider resolving these issues today.
Secret Values
Secret values cannot be read through the API; they are returned as the string <redacted>. It is, however, possible to update them.
It is not possible to set any secret value to the string <redacted>.
Security Permissions
Config file permissions have not changed; they are editable by the system deployer.
API requests to make changes to configuration will only be accepted if coming from an Anchore User with system-admin permissions.
4.1.2 - Environment Variables
Environment variable references may be used in the Anchore config.yaml file to set values that need to be configurable during deployment.
Using this mechanism a common configuration file can be used with multiple Anchore instances with key values being passed using environment variables.
The config.yaml configuration file is read by Anchore and any references to variables prefixed with ANCHORE will be replaced by the value of the matching environment variable.
For example, in the sample configuration file the host_id parameter is set by appending the ANCHORE_HOST_ID variable to the string dockerhostid:
host_id: 'dockerhostid-${ANCHORE_HOST_ID}'
Notes:
Only variables prefixed with ANCHORE will be replaced
If an environment variable is referenced in the configuration file but not set in the environment then a warning will be logged
It is recommended to use curly braces, for example ${ANCHORE_PARAM}, to avoid potentially ambiguous cases
Passing Environment Variables as a File
Environment variables may also be passed as a file containing key-value pairs.
The system will check for an environment variable named ANCHORE_ENV_FILE. If this variable is set, then Anchore will attempt to read a file at the location it specifies.
The Anchore environment file is read before any other Anchore environment variables so any ANCHORE variables passed in the environment will override the values set in the environment file.
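A minimal sketch of the mechanism (the file path and contents are examples only):
# /config/anchore.env - plain KEY=value pairs
ANCHORE_ADMIN_PASSWORD=<choose-a-strong-password>
ANCHORE_LOG_LEVEL=DEBUG
# Tell the Anchore services where to find the file
export ANCHORE_ENV_FILE=/config/anchore.env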
4.2 - Enterprise UI Configuration
The Enterprise UI service has some static configuration options that are read
from /config/config-ui.yaml inside the UI container image when the system
starts up.
The configuration is designed to not require any modification when using the
quickstart (docker compose) or production (Helm) methods of deploying Anchore
Enterprise. If modifications are desired, the options, their meanings, and
environment overrides are listed below for reference:
The (required) license_path key specifies the location of the local system
folder containing the license.yaml license file required by the Anchore
Enterprise UI web service for product activation. This value can be overridden
by using the ANCHORE_LICENSE_PATH environment variable.
license_path: '/'
The (required) enterprise_uri key specifies the address of the Anchore
Enterprise service. The value must be a string containing a properly-formed
‘http’ or ‘https’ URI. This value can be overridden by using the
ANCHORE_ENTERPRISE_URI environment variable.
enterprise_uri: 'http://api:8228/v2'
The (required) redis_uri key specifies the address of the Redis service. The
value must be a string containing a properly-formed ‘http’, ‘https’, or
redis URI. Note that the default configuration uses the REdis Serialization
Protocol (RESP). This value can be overridden by using the ANCHORE_REDIS_URI
environment variable.
redis_uri: 'redis://ui-redis:6379'
The (required) appdb_uri key specifies the location and credentials for the
postgres DB endpoint used by the UI. The value must contain the host, port, DB
user, DB password, and DB name. This value can be overridden by using the
ANCHORE_APPDB_URI environment variable.
The (required) reports_uri key specifies the address of the Reports service.
The value must be a string containing a properly-formed ‘http’ or ‘https’ URI
and can be overridden by using the ANCHORE_REPORTS_URI environment variable.
Note that the presence of an uncommented reports_uri key in this file (even
if unset, or set with an invalid value) instructs the Anchore Enterprise UI
web service that the Reports feature must be enabled.
reports_uri: 'http://reports:8228/v2'
The (optional) enable_ssl key specifies if SSL operations should be enabled
within the web app runtime. When this value is set to True, secure
cookies will be used with a SameSite value of None. The value must be a
Boolean, and defaults to False if unset.
Note: Only enable this property if your UI deployment is configured to run
within an SSL-enabled environment (for example, behind a reverse proxy, in the
presence of signed certs etc.)
This value can be overridden by using the ANCHORE_ENABLE_SSL environment
variable.
enable_ssl: False
The (optional) enable_proxy key specifies whether to trust a reverse proxy
when setting secure cookies (via the X-Forwarded-Proto header). The value
must be a Boolean, and defaults to False if unset. In addition, SSL must be
enabled for this to work. This value can be overridden by using the
ANCHORE_ENABLE_PROXY environment variable.
enable_proxy: False
The (optional) allow_shared_login key specifies if a single set of user
credentials can be used to start multiple Anchore Enterprise UI sessions; for
example, by multiple users across different systems, or by a single user on a
single system across multiple browsers.
When set to False, only one session per credential is permitted at a time,
and logging in will invalidate any other sessions that are using the same set
of credentials. If this property is unset, or is set to anything other than a
Boolean, the web service will default to True.
Note that setting this property to False does not prevent a single session
from being viewed within multiple tabs inside the same browser. This value
can be overridden by using the ANCHORE_ALLOW_SHARED_LOGIN environment
variable.
allow_shared_login: True
The (optional) redis_flushdb key specifies if the Redis datastore containing
user session keys and data is emptied on application startup. If the datastore
is flushed, any users with active sessions will be required to
re-authenticate.
If this property is unset, or is set to anything other than a Boolean, the web
service will default to True. This value can be overridden by using the
ANCHORE_REDIS_FLUSHDB environment variable.
redis_flushdb: True
The (optional) custom_links key allows a list of up to 10 external links to
be provided (additional items will be excluded). The top-level title key
provides the label for the menu (if present, otherwise the string “Custom
External Links” will be used instead).
Each link entry must have a title of greater than 0-length and a valid URI. If
either item is invalid, the entry will be excluded.
custom_links:
  title: Custom External Links
  links:
    - title: Example Link 1
      uri: https://example.com
    - title: Example Link 2
      uri: https://example.com
    - title: Example Link 3
      uri: https://example.com
    - title: Example Link 4
      uri: https://example.com
    - title: Example Link 5
      uri: https://example.com
    - title: Example Link 6
      uri: https://example.com
    - title: Example Link 7
      uri: https://example.com
    - title: Example Link 8
      uri: https://example.com
    - title: Example Link 9
      uri: https://example.com
    - title: Example Link 10
      uri: https://example.com
The (optional) force_websocket key specifies if the WebSocket protocol must
be used for socket message communications. By default, long-polling is
initially used to establish the handshake between client and web service,
followed by a switch to WS if the WebSocket protocol is supported.
If this value is unset, or is set to anything other than a Boolean, the web
service will default to False.
This value can be overridden by using the ANCHORE_FORCE_WEBSOCKET
environment variable.
force_websocket: False
The (optional) authentication_lock keys specify if a user should be
temporarily prevented from logging in to an account after one or more failed
authentication attempts. For this feature to be enabled, both values must be
whole numbers greater than 0. They can be overridden by using the
ANCHORE_AUTHENTICATION_LOCK_COUNT and ANCHORE_AUTHENTICATION_LOCK_EXPIRES
environment variables.
The count value represents the number of failed authentication attempts
allowed to take place before a temporary lock is applied to the username. The
expires value represents, in seconds, how long the lock will be applied for.
Note that, for security reasons, when this feature is enabled it will be
applied to any submitted username, regardless of whether the user exists.
authentication_lock:
  count: 5
  expires: 300
The (optional) enable_add_repositories key specifies if repositories can be
added via the application interface by either administrative users or standard
users. In the absence of this key, the default is True. When enabled, this
property also suppresses the availability of the Watch Repository toggle
associated with any repository entries displayed in the Artifact Analysis
view.
Note that in the absence of one or all of the properties, the default is also
True. Thus, this key, and a child key corresponding to an account type (that
is itself explicitly set to False) must be set for the feature to be
disabled for that account.
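A sketch of the shape described above, assuming admin and standard as the account-type child keys:
enable_add_repositories:
  admin: True
  standard: False   # disables the feature for standard users only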
The (optional) ldap_timeout and ldap_connect_timeout keys respectively
specify the time (in milliseconds) the LDAP client should let operations stay
alive before timing out, and the time (in milliseconds) the LDAP client should
wait before timing out on TCP connections. Each value must be a whole number
greater than 0.
When these values are unset (or set incorrectly) the app will fall back to
using a default value of 6000 milliseconds. The same default is used when
the keys are not enabled.
These values can be overridden by using the ANCHORE_LDAP_AUTH_TIMEOUT and
ANCHORE_LDAP_AUTH_CONNECT_TIMEOUT environment variables.
ldap_timeout: 6000
ldap_connect_timeout: 6000
The (optional) custom_message key allows you to provide a message that will
be displayed on the application login page below the Username and
Password fields. The key value must be an object that contains:
A title key, whose string value provides a title for the message—which can
be up to 100 characters
A message key, whose string value is the message itself—which can be up to
500 characters
Note: Both title and message values must be present and contain at
least 1 character for the message box to be displayed. If either value
exceeds the character limit, the string will be truncated with an ellipsis.
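For example (the values shown are illustrative):
custom_message:
  title: 'Scheduled Maintenance'
  message: 'This deployment will be unavailable on Saturday between 01:00 and 03:00 UTC.'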
The (optional) log_level key allows you to set the descriptive detail of the
application log output. The key value must be a string selected from the
following priority-ordered list:
error
warn
info
http
debug
Once set, each level will automatically include the output for any levels
above it; for example, info will include the log output for the warn and
error levels, whereas error will only show error output.
This value can be overridden by using the ANCHORE_LOG_LEVEL environment
variable. When no level is set, either within this configuration file or by
the environment variable, a default level of http is used.
log_level: 'http'
The (optional) enrich_inventory_view key allows you to set whether the
Kubernetes feature should aggregate and include compliance and
vulnerability data from the reports service. Setting this key to be False
can increase performance on high-volume systems.
This value can be overridden by using the ANCHORE_ENRICH_INVENTORY_VIEW
environment variable. When no flag is set, either within this configuration
file or by the environment variable, a default setting of True is used.
enrich_inventory_view: True
The (optional) enable_prometheus_metrics key enables exporting monitoring
metrics to Prometheus. The metrics are made available on the /metrics
endpoint.
This value can be overridden by using the ANCHORE_ENABLE_METRICS environment
variable. When no flag is set, either within this configuration file or by the
environment variable, a default setting of False is used.
enable_prometheus_metrics: False
Sections of the UI configuration settings can now be found from within the UI itself! Navigate to the ‘System’ heading in the sidebar and then select ‘Configuration’. Here you will see some of the exposed configuration options:
NOTE: The latest default UI configuration file can always be extracted from
the Enterprise UI container to review the latest options, environment overrides
and descriptions of each option using the following process:
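A sketch of that process using standard docker commands (the UI image name and tag are assumptions; use the image and tag from your own deployment):
docker create --name ae-ui-default-config docker.io/anchore/enterprise-ui:v5.18.0
docker cp ae-ui-default-config:/config/config-ui.yaml ./config-ui.yaml
docker rm ae-ui-default-config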
4.3 - AnchoreCTL Configuration
The anchorectl command can be configured with command-line arguments, environment variables, and/or a configuration file. Typically, a configuration file should be created to set any static configuration parameters (your Anchore Enterprise’s URL, logging behavior, cataloger configurations, etc.), so that invocations of the tool only require you to provide command-specific parameters as environment/CLI options. However, to fully support stateless scripting, a configuration file is not strictly required (settings can be put in environment/CLI options).
Important AnchoreCTL is version-aligned with Anchore Enterprise for major/minor. Please refer to the Enterprise Release Notes for the supported version of AnchoreCTL.
The anchorectl tool will search for an available configuration file using the following search order, until it finds a match:
.anchorectl.yaml
anchorectl.yaml
.anchorectl/config.yaml
~/.anchorectl.yaml
~/anchorectl.yaml
$XDG_CONFIG_HOME/anchorectl/config.yaml
Note The anchorectl can also utilize inline Environment Variables which override any configuration file settings.
Note The ANCHORECTL_CONFIG environment variable can be used to specify a custom location for the AnchoreCTL YAML file.
For the most basic functional invocation of anchorectl, the only required parameters are listed below:
url:""# the URL to the Anchore Enterprise API (env var: "ANCHORECTL_URL")username:""# the Anchore Enterprise username (env var: "ANCHORECTL_USERNAME")password:""# the Anchore Enterprise user's login password (env var: "ANCHORECTL_PASSWORD")
For example, with our Docker Compose quickstart deployment of Anchore Enterprise running on your local system, your ~/.anchorectl.yaml would look like the following
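url: "http://localhost:8228"
username: "admin"
password: "foobar"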
A good way to quickly test that your anchorectl client is ready to use against a deployed and running Anchore Enterprise endpoint is to exercise the system status call, which will display status information fetched from your Enterprise deployment. With ~/.anchorectl.yaml installed and populated correctly, no environment or parameters are required:
anchorectl system status
✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE │ HOST ID │ URL │ UP │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer │ anchore-quickstart │ http://analyzer:8228 │ true │ available │ 5180 │ 5.18.0 │
│ policy_engine │ anchore-quickstart │ http://policy-engine:8228 │ true │ available │ 5180 │ 5.18.0 │
│ apiext │ anchore-quickstart │ http://api:8228 │ true │ available │ 5180 │ 5.18.0 │
│ reports │ anchore-quickstart │ http://reports:8228 │ true │ available │ 5180 │ 5.18.0 │
│ reports_worker │ anchore-quickstart │ http://reports-worker:8228 │ true │ available │ 5180 │ 5.18.0 │
│ data_syncer     │ anchore-quickstart │ http://data-syncer:8228     │ true │ available      │ 5180       │ 5.18.0       │
│ simplequeue │ anchore-quickstart │ http://queue:8228 │ true │ available │ 5180 │ 5.18.0 │
│ notifications │ anchore-quickstart │ http://notifications:8228 │ true │ available │ 5180 │ 5.18.0 │
│ catalog │ anchore-quickstart │ http://catalog:8228 │ true │ available │ 5180 │ 5.18.0 │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘
Congratulations, you should now have a working AnchoreCTL.
Using Environment Variables
For some use cases being able to supply inline environment variables can be useful, see the following system status call as an example.
ANCHORECTL_URL="http://localhost:8228" ANCHORECTL_USERNAME="admin" ANCHORECTL_PASSWORD="foobar" anchorectl system status
✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE │ HOST ID │ URL │ UP │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ reports │ anchore-quickstart │ http://reports:8228 │ true │ available │ 5180 │ 5.18.0 │
│ analyzer │ anchore-quickstart │ http://analyzer:8228 │ true │ available │ 5180 │ 5.18.0 │
│ notifications │ anchore-quickstart │ http://notifications:8228 │ true │ available │ 5180 │ 5.18.0 │
│ apiext │ anchore-quickstart │ http://api:8228 │ true │ available │ 5180 │ 5.18.0 │
│ policy_engine │ anchore-quickstart │ http://policy-engine:8228 │ true │ available │ 5180 │ 5.18.0 │
│ reports_worker │ anchore-quickstart │ http://reports-worker:8228 │ true │ available │ 5180 │ 5.18.0 │
│ simplequeue │ anchore-quickstart │ http://queue:8228 │ true │ available │ 5180 │ 5.18.0 │
│ catalog │ anchore-quickstart │ http://catalog:8228 │ true │ available │ 5180 │ 5.18.0 │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘
All the environment variable options can be seen by using anchorectl --help
Using API Keys
If you do not want to expose your private credentials in the configuration file, you can generate an API Key that allows most of the functionality of anchorectl.
Please see Generating API Keys
Once you generate the API Key, the UI will give you a key value. You can use this key with the anchorectl configuration:
NOTE: API Keys authenticate using HTTP basic auth. The username for API keys has to be _api_key.
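For example, a ~/.anchorectl.yaml using an API key would look like the following (the key value itself comes from the UI):
url: "http://localhost:8228"
username: "_api_key"
password: "<the API key value generated in the UI>"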
Without setting up ~/.anchorectl.yaml or any configuration file, you can interact using environment variables:
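For example (substituting your own endpoint and key value):
ANCHORECTL_URL="http://localhost:8228" ANCHORECTL_USERNAME="_api_key" ANCHORECTL_PASSWORD="<API key value>" anchorectl system status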
Using Distributed Analysis Mode
If you intend to use anchorectl in Distributed Analysis mode, then you’ll need to enable two additional catalogers (secret-search, and file-contents) to mirror the behavior of Anchore Enterprise defaults, when performing an image analysis in Centralized Analysis mode. Below are the ~/.anchorectl.yaml settings to mirror the Anchore Enterprise defaults.
The anchorectl tool has extensive built-in help information for each command and operation, with many of the parameters allowing for environment overrides. To start with anchorectl, you can run the command with --help to see all the operation sections available:
anchorectl --help
A convenient way to see your changes taking effect is to instruct anchorectl to output DEBUG level logs to the screen using the -vv flag, which will display the full configuration that the tool is using (including the options you set, plus all the defaults and additional configuration file options available).
anchorectl -vv
NOTE: if you would like to capture the full default configuration as displayed when running with -vv, you can paste that output as the contents of your .anchorectl.yaml, and then work with the settings for full control.
Please note that if your proxy uses an untrusted certificate you may also need the following:
export ANCHORECTL_HTTP_TLS_INSECURE=true
4.4 - Using the Analysis Archive
As mentioned in concepts, there are two locations for image analysis to be stored:
The working set: the standard state after analysis completes. In this location, the image is fully loaded and available for policy evaluation, content, and vulnerability queries.
The archive set: a location to keep image analysis data that cannot be used for policy evaluation or queries, but uses cheaper storage and less database space, and can be reloaded into the working set as needed.
To add an image to the archive, use the digest. All analysis, policy evaluations, and tags will be added to the archive.
NOTE: this does not remove it from the working set. To fully move it, you must first archive the image and then delete it from the working set using AnchoreCTL or the API directly.
Archiving Images
Archiving an image analysis creates a snapshot of the image’s analysis data, policy evaluation history, and tags, and stores it in a different storage location and a different record location than working-set images.
# anchorectl image list
✔ Fetched images
┌───────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────┬────────┐
│ TAG │ DIGEST │ ANALYSIS │ STATUS │
├───────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────┼────────┤
│ docker.io/ubuntu:latest │ sha256:33bca6883412038cc4cbd3ca11406076cf809c1dd1462a144ed2e38a7e79378a │ analyzed │ active │
│ docker.io/ubuntu:latest │ sha256:42ba2dfce475de1113d55602d40af18415897167d47c2045ec7b6d9746ff148f │ analyzed │ active │
│ docker.io/localimage:latest │ sha256:74c6eb3bbeb683eec0b8859bd844620d0b429a58d700ea14122c1892ae1f2885 │ analyzed │ active │
│ docker.io/nginx:latest │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ analyzed │ active │
└───────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────┴────────┘
# anchorectl archive image add sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
✔ Added image to archive
┌─────────────────────────────────────────────────────────────────────────┬──────────┬────────────────────────┐
│ DIGEST │ STATUS │ DETAIL │
├─────────────────────────────────────────────────────────────────────────┼──────────┼────────────────────────┤
│ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ archived │ Completed successfully │
└─────────────────────────────────────────────────────────────────────────┴──────────┴────────────────────────┘
Then to delete it in the working set (optionally):
NOTE: You may need to use --force if the image is the newest of its tags and has active subscriptions.
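A sketch of that deletion using the digest archived above (confirm the exact flag with anchorectl image delete --help):
# anchorectl image delete sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc --force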
4.5 - Artifact Lifecycle Policies
Artifact Lifecycle Policies are instruction sets which perform lifecycle events on certain types of artifacts.
Each policy can perform an action on a given artifact_type based on configured policy_conditions (rules/selectors).
As an example, a system administrator may create an Artifact Lifecycle Policy that will automatically delete any image that has an analysis date older than 180
days.
WARNING ⚠️
⚠️ These policies have the ability to delete data without archive/backup. Proceed with caution!
⚠️ These policies are GLOBAL they will impact every account on the system.
⚠️ These policies can only be created and managed by a system administrator.
Policy Components
Artifact Lifecycle Policies are global policies that will execute on a schedule defined by a cycle_timer within the
catalog service. services.catalog.cycle_timers.artifact_lifecycle_policy_tasks has a default time of every 12 hours.
The policy is constructed with the following parameters:
Artifacts Types - The type of artifacts the policy will consider. The current supported type is image.
Inclusion Rules - The set of criteria which will be used to determine the set of artifacts to work on. All criteria must be satisfied for the policy to enact on an artifact.
days_since_analyzed
Selects artifacts whose analyzed_at date is n or more days old.
If this value is set to less than zero, this rule is disabled.
An artifact that has not been analyzed, either because it failed analysis or the analysis is pending, will not be included.
even_if_exists_in_runtime_inventory
When true, an artifact will be included even if it exists in the Runtime Inventory.
When false, an artifact will not be included if it exists in the Runtime Inventory. Essentially protecting artifacts found in your runtime inventory. Please review the Inventory Time-To-Live for information on how to prune the Runtime Inventory.
include_base_images
When true, images that have ancestral children will be included.
When false, images that have ancestral children will not be included.
Note: These are evaluated per run. As children are deleted, a previously excluded parent image may also become eligible for deletion.
Policy Actions - After the policy determines a set of artifacts that satisfy the Inclusion Rules, this
is the action which will be performed on them. The current supported action is delete.
Actioned artifacts will have a matching system Event created for audit and notification purposes.
Policy Interaction
If more than one policy is enabled, each policy will work independently, using its set of rules to determine if any
artifacts satisfy its criteria. Each policy will apply its action on the set of artifacts.
Creating a new Artifact Lifecycle Policy
Due to the potentially destructive nature of these policies, every parameter must be explicitly declared when creating a new policy. This means all policy rules must be explicitly configured or explicitly disabled.
Note: it is possible to request “deleted” policies through this API for audit reasons. The deleted_at field will be null, and enabled will be true if the policy is active.
anchorectl system artifact-lifecycle-policy get 5620b641-a25f-4b1f-966c-929281a41e16
✔ Fetched artifact-lifecycle-policy
Name: 2023-11-22T13:02:24.621Z
Policy Conditions:
- artifactType: image
daysSinceAnalyzed: 1
evenIfExistsInRuntimeInventory: true
includeBaseImages: false
version: 1
Uuid: 5620b641-a25f-4b1f-966c-929281a41e16
Action: delete
Deleted At:
Enabled: true
Updated At: 2023-11-22T13:02:24Z
Created At: 2023-11-22T13:02:24Z
Description: test description
Delete a policy
Note: for the purposes of audit the policy will still remain in the system. It will be disabled and marked deleted. This will effectively make it hidden unless explicitly requested by its UUID through the API.
# anchorectl system artifact-lifecycle-policy delete 73226831-9140-4d27-a922-4a61e43dbb0d
✔ Deleted artifact-lifecycle-policy
No results
4.6 - Content Hints
For an overview of the content hints and overrides features, see the feature overview
Enabling Content Hints
This feature is disabled by default to ensure that images may not exercise this feature without the admin’s explicit approval.
In each analyzer’s config.yaml file (by default at /config/config.yaml):
Set the enable_hints: true setting in the analyzer service section of config.yaml (see the sketch after this list).
If using the default config.yaml included in the image, you may instead set an environment variable (e.g. for use in our provided config for Docker Compose):
ANCHORE_HINTS_ENABLED=true environment variable for the analyzer service.
For Helm: see the Helm installation instructions for enabling the hints file mechanism when deploying with Helm.
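For reference, a minimal sketch of the config.yaml placement described in the first bullet above (the nesting is assumed from the default config layout; verify against the config.yaml extracted from your image):
services:
  analyzer:
    enable_hints: true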
4.7 - Using Dashboard
Overview
The Dashboard is your configurable landing page where insights into the collective status of your container image environment can be displayed through various widgets. Utilizing the Enterprise Reporting Service, the widgets are hydrated with metrics which are generated and updated on a cycle, the duration of which is determined by application configuration.
Note: Because the reporting data cycle is configurable, the results shown in this view may not precisely reflect actual analysis output at any given time.
For more information on how to modify this cycle or the Reporting Service in general, please refer to the Reporting Service.
The following sections in this document describe how to add widgets to the dashboard and how to customize the dashboard layout to your preference.
Widgets
Adding a Widget
To add a new widget, click the Add New Widget button present in the Dashboard view. Or, if no widgets are defined, click the Let’s add one! button shown.
Upon doing so, a modal will appear with several properties described below:
Property
Description
Name
The name shown within the Widget’s header.
Mode
‘Compact’ a widget to keep data easily digestible at a glance. ‘Expand’ to view how your data has evolved over a configurable time period.
Collection
The collection of tags you’re interested in. Toggle to view metrics for all tags - including historical ones.
Time Series Settings
The time period you wish to view metrics for within the expanded mode.
Type
The category of information such as ‘Vulnerabilities’ or ‘Policy Evaluations’ which denotes what metrics are capable of being shown.
Metrics
The list of metrics available based on Type.
Once you enter the required properties and click OK, the widget will be created and any metrics needed to hydrate your Dashboard will be fetched and updated.
Note: All fields except Type are editable by clicking the button shown on the top right of the header when hovering over a widget.
Viewing Results
The Reporting Service at its core aggregates data on resources across accounts and generates metrics. These metrics, in turn, fuel the Dashboard’s mission to provide actionable items straight to the user - that’s you!
Leverage these results to dive into the exact reason for a failed policy evaluation or the cause and fix of a critical vulnerability.
Vulnerabilities
Vulnerabilities are grouped and colored by severity. Critical, High, and Medium vulnerabilities are included by default but you can toggle which ones are relevant to your interests by clicking the button.
Clicking one of these metrics navigates you to a view (shown below) where you can browse the filtered set of vulnerabilities matching that severity.
For more info on a particular vulnerability, click on its corresponding button visible in the Links column. To view the exact tags affected, drill down to a specific repository by expanding the arrows ().
View that tag’s in-depth analysis by clicking on the value within its Image Digest column.
Policy Evaluations
Policy Evaluations are grouped by their evaluation outcome such as Pass or Fail and can be further filtered by the reason for that result. All reasons are included by default but as with other widget properties, they can be edited by clicking the button.
Clicking one of these results navigates you to a view (shown below) where you can browse the affected and filtered set of tags.
Dig down to a specific tag by expanding the arrow () shown on the left side of the row.
Navigate using the Image Digest value to view even more info such as the specific policy being triggered and what exactly is triggering it. If you’re interested in viewing the contents of your policy, click on the Policy ID value.
Dashboard Configuration
After populating your Dashboard with various widgets, you can easily modify the layout by using some methods explained below:
Click this icon shown in the top right of the header of a widget to Drag and Drop it into a new location.
Click this icon shown in the top right of a widget to Expand it and include a graphical representation of how your data has evolved over a time period of your choice.
Click this icon shown in the top right of a widget to Compact it into an easily digestible view of the metrics you’re interested in.
Click this icon shown in the top right of a widget to Delete it from the dashboard.
4.8 - Data Synchronization
Introduction
In this section, you’ll learn how Anchore Enterprise ingests the data used for analysis and vulnerability management.
Enterprise manages four datasets:
Vulnerability Database (grypedb)
ClamAV Malware Database
CISA KEV (Known Exploited Vulnerabilities)
EPSS (Exploit Prediction Scoring System)
It also covers the requirements for running the Data Syncer Service.
You can read more about how Feeds works in the feature overview.
Requirements
Network Ingress
The following two FQDNs need to be allowlisted in your network to allow the Data Syncer Service to communicate with the Anchore Data Service:
The Data Syncer Service will check every hour if there is new data available from the Anchore Data Service.
If it finds a new dataset then it will sync it down immediately.
It will also trigger the Policy Engine Service to reprocess the data to make it available for policy evaluations. The analyzer checks the
data syncer for a new ClamAV Malware signature database before every malware scan (if enabled).
Controlling Which Feeds and Groups are Synced
During initial data sync, you can always query the progress and status of the feed sync using anchorectl.
Using the Config File to Include/Exclude Feeds and Package Types when scanning for vulnerabilities
With the feed service removed, Enterprise no longer supports excluding certain providers and package types from the vulnerability feed.
To ensure the same experience when using the product, you can now exclude certain providers and package types from matching vulnerabilities.
When Anchore Enterprise runs, the Data Syncer Service will begin to synchronize security feed data from the Anchore Data Service.
CVE data for Linux distributions such as Alpine, CentOS, Debian, Oracle, Red Hat and Ubuntu will be downloaded.
The initial sync typically takes anywhere from 1-5 minutes depending on your environment and network speed. After that, the Data Syncer Service will check every hour if there is new data available from the Anchore Data Service. If it finds a new dataset, it will sync it down immediately.
For air-gapped environments, please see the Air-Gapped documentation.
Checking Feed Status
Feed information can be retrieved through the API and AnchoreCTL.
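For example, using AnchoreCTL (a sketch; confirm the exact subcommand with anchorectl feed --help):
# anchorectl feed list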
This command lists the feeds synchronized by Anchore Enterprise, the last sync time, and the current record count.
Note: Time is reported as UTC, not local time.
Manually initiating feed sync
You can initiate a manual sync of the latest datasets which tells the Data Syncer Service to download the latest feed data from the Anchore Data Service.
# anchorectl feed sync
✔ Synced feeds
This will also inform the policy-engine to sync down the new dataset if the Data Syncer Service has successfully downloaded the latest data.
Forcing a full resync
If there is a scenario where you want the Data Syncer Service to force download the latest datasets and overwrite the existing data, you can use the --force_sync flag.
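As a sketch, assuming the flag attaches to the feed sync command shown above:
# anchorectl feed sync --force_sync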
As of v5.10, AnchoreCTL is now capable of importing and exporting feeds. AnchoreCTL will be downloading the datasets from the Anchore Data Service and then importing them into the Anchore Enterprise deployment. For more detail regarding the Anchore Data Service, please see Anchore Data Service.
Configuration
To configure your Anchore Enterprise deployment to work in an air-gapped environment, you will need to disable the Data Syncer Service’s automatic feed sync.
To confirm auto sync is disabled, the following log will be emitted by the data-syncer service upon startup (in the event you wanted to double check):
[INFO] [anchore_enterprise.services.data_syncer.service/handle_data_sync():33] | Auto sync is disabled. Skipping data sync.
Downloading and Importing Datasets
Note
For downloading and uploading datasets using anchoreCTL, ensure you have greater than 2GB of free space in the current working directory of your host system in which you are saving the bundle.
Once installed, AnchoreCTL can be used to download the latest feed data from the Anchore Data Service. This data can then be moved across the air
gap and uploaded into your Anchore Enterprise deployment.
Downloading the Datasets
Run the following command outside your air-gapped environment to download the datasets
Using your license key
anchorectl airgap feed download -f <filename> -k <your api key>
Using your license file
anchorectl airgap feed download -f <filename> -l <path to your license file>
To get your API key, check your license file for a field called apiKey.
This command will download all the feeds from the hosted service to the file specified by ‘-f’.
This command can take a bit of time to return depending on your connection speed.
The resulting file will be approximately 0.5 GB in size as of this writing but will continue to grow as more data is added to the feeds.
Importing the Datasets
Take a copy of this file and move it into your air-gapped environment. Then run the following command to import the feeds into your Anchore Enterprise deployment.
anchorectl airgap feed upload -f <filename>
Your Analyzer Service and Policy Engine Service will now be able to fetch the latest data from the Data Syncer Service as normal.
This procedure must be repeated each time you want to update the datasets in your air gapped environment.
Note
Use the same file for downloading data every time. AnchoreCTL will read the metadata from the file and determine if it needs to download any newer data or if you already have the latest. If it does download newer data, the metadata in the file is overwritten with the latest metadata; this way you will not have to perform any unnecessary downloads.
4.8.4 - Status Page
A live status page is available for real-time updates on the Anchore Data Service, including information on outages, maintenance, and options for subscribing to notifications.
You can access the status page at https://status.anchore-enterprise.com
The status page is updated in real-time and provides information on the following:
Current Status - The current status of the Anchore Data Service.
Incidents - A list of any ongoing incidents that may be affecting the Anchore Data Service.
Scheduled Maintenance - A list of any upcoming maintenance windows that may affect the Anchore Data Service.
Subscribe to Updates - Options for subscribing to updates via email, SMS, or webhook.
Past Incidents - A list of past incidents that have affected the Anchore Data Service.
Historical Uptime - Historical uptime data for the Anchore Data Service.
The status page does not auto-refresh. You have to refresh it manually to see the latest status.
4.9 - Integration
Overview
Anchore Enterprise exposes an API through which external software entities like inventory
agents can report their health status. These software entities (agents, plugins etc.) serve
as a mechanism for external systems like Kubernetes clusters or container image repositories
to integrate with Anchore Enterprise.
Enterprise refers to these software entities simply as integrations. A deployed Kubernetes Inventory agent is an example of an integration instance. A deployed Kubernetes Admission Controller is another example of an integration instance.
Integration health reporting
An integration instance can send health reports to Anchore Enterprise, which in turn maintains
a record of those health reports. They are used by Enterprise to determine the status of the
integration instances.
An integration instance can send a health report at most every 30 seconds. To prevent
unlimited growth of stored health reports, Anchore’s Catalog Service will prune old health
reports.
The configuration setting below allows you to specify how long health reports should be kept by the Catalog Service. This is the default setting found in the values file.
4.10 - Logging
Anchore services produce detailed logs that contain information about user interactions, internal processes, warnings and errors.
Log Level
The verbosity of the logs is controlled using the logging.log_level setting in config.yaml (for manual installations) or the corresponding ANCHORE_LOG_LEVEL environment variable (for docker compose or Helm installations) for each service.
The log levels are DEBUG, INFO, WARNING, ERROR, and CRITICAL, where the default is INFO. Most of the time, the default level is sufficient as the logs will contain WARNING, ERROR, and CRITICAL messages as well. But for deep troubleshooting, it is always recommended to increase the log level to DEBUG in order to ensure the availability of the maximum amount of information.
Changing Log Level via the API
The log level can be adjusted via the API without having to redeploy services:
The log level can also be modified by a user with the system configuration permissions by navigating to System -> Configuration. Each service’s log level can be configured individually. Example for Analyzer Log Level:
Structured Logging (JSON)
Anchore services can be configured to log in JSON format. This is particularly helpful if users ship logs to an external log aggregator.
Helm
With our Helm chart’s values.yaml, users can change the structured logging boolean in the following section:
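A minimal sketch of that section is shown below, assuming the chart exposes the flag as anchoreConfig.logging.structured_logging; verify the exact key against the chart's values.yaml:
anchoreConfig:
  logging:
    structured_logging: true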
Docker Compose
With Docker Compose, users will need to mount the config.yaml file into the containers with the following section modified to enable structured logging:
Make the modifications in the config.yaml to enable structured logging, and mount it to each service. The following is an example for the API service, but for each service that structured logging needs to be enabled on, the config.yaml will need to be mounted similarly as a volume:
# The primary API endpoint service
api:
  [...]
  volumes:
    - ./license.yaml:/license.yaml:ro
    - ./config.yaml:/config/config.yaml:ro
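For reference, the corresponding change inside config.yaml is a boolean under the logging section; the key name below is an assumption and should be checked against the config.yaml shipped with your release:
logging:
  structured_logging: true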
4.11 - Malware & Cataloger Scans
Malware & Cataloger Scanning Overview
When an image is analyzed/scanned, you have the ability to configure the process to best suit your particular use case and/or desired security controls.
After discovery, these findings can later be used within Anchore's policy engine rules and gates, so please don't forget to review that configuration too.
Both the Malware scanner and the extra Catalogers offer additional capabilities; details on these are as follows:
Malware
For an overview of the feature and how it works, see Malware Scanning.
Catalogers
During Analysis/Scans of your images, Anchore has the ability to run extra catalogers or searches. These are as follows:
retrieve_files - retrieve and index files matching a configured file list
secret_search and content_search - perform a search across file contents for a configured regexp match. Findings are then cataloged accordingly.
Limitations and Resource Usage
Both the Malware and Catalogers will impact analysis/scanning time, and this time will depend on the size and number of files the image contains.
Anchore supports sources. However, sources currently need to be analyzed with Syft and not AnchoreCTL, and Syft does not currently support catalogers or malware checks.
Where possible, and depending on your use case, you should offload to Distributed Scanning/Analysis to reduce analyzer compute load on your central Anchore deployment.
Malware
Files in an image which are greater than 2GB will be skipped due to a limitation in ClamAV. Any skipped file will be identified with a Malware Signature as ANCHORE.FILE_SKIPPED.MAX_FILE_SIZE_EXCEEDED.
Malware scanning can ONLY operate when using Centralized Analysis and NOT Distributed Analysis.
Catalogers
Running extra catalogers will require more resources and time to perform analysis of images. Please take this into consideration when enabling and defining your regexp values.
This can be controlled with the MAXFILESIZE match parameter, which limits the search to files below the configured size.
Enabling & Disabling Malware Scans & Catalogers
The process for enabling and configuring the Malware and other catalogers differs between Helm and Compose deployments.
Additionally, there are two modes in which you can scan/analyze images, and therefore two places where this capability can be configured: 1. Distributed Mode 2. Centralized Mode
For Distributed Analysis, the catalogers are configured in the AnchoreCTL Configuration.
For Centralized Analysis, the catalogers are configured in the centralized Anchore Deployment via the Analyzer config documented on this page.
Helm
Update the Helm values.yaml file. Below is an example configuration with Malware, retrieve_files, secret_search enabled. Helm will take these values and define a ConfigMap in your Anchore Kubernetes deployment.
Malforming this file can cause the Anchore Analyzer to fail on all image analysis!
anchoreConfig:
  analyzer:
    configFile:
      retrieve_files:
        file_list:
          - '/etc/passwd'
      secret_search:
        match_params:
          - MAXFILESIZE=10000
        regexp_match:
          - "AWS_ACCESS_KEY=(?i).*aws_access_key_id( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9]).*"
          - "AWS_SECRET_KEY=(?i).*aws_secret_access_key( *=+ *).*(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=]).*"
          - "PRIV_KEY=(?i)-+BEGIN(.*)PRIVATE KEY-+"
          - "DOCKER_AUTH=(?i).*\"auth\": *\".+\""
          - "API_KEY=(?i).*api(-|_)key( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20,60}(?![A-Z0-9]).*"
          # - "ALPINE_NULL_ROOT=^root:::0:::::$"
      ### Uncomment content_search: {} to configure file content searching
      # Very expensive operation - recommend you carefully test and review
      # content_search:
      #   match_params:
      #     - MAXFILESIZE=10000
      #   regexp_match:
      #     - "EXAMPLE_MATCH="
      ### Malware scanning occurs only at analysis time when the image content itself is available
      malware:
        clamav:
          # Set to true to enable the malware scan
          enabled: true
          # Set to true to enable the db refresh on each scan
          db_update_enabled: true
          # Maximum time in milliseconds that ClamAV scan is allowed to run (default is 30 minutes)
          max_scan_time: 1800000
Please review the helm chart example values.yaml file for further detail.
Docker Compose
The Malware and Catalogers can be configured and enabled in the 'analyzer_config.yaml' file. This file then needs to be mounted as a file volume in your Anchore Docker Compose file under the analyzer: service as shown below:
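A minimal sketch of that volume mount is shown below; the in-container path is an assumption and should match the path used by your deployment:
  analyzer:
    volumes:
      - ./analyzer_config.yaml:/anchore_service/analyzer_config.yaml:ro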
This file should contain the required configuration parameters. Please see the following example and adjust as required.
malware:
  clamav:
    # Set this to true to enable the malware scan
    enabled: true
    # Set this to false to turn off the db refresh on each scan
    db_update_enabled: true
retrieve_files:
  max_file_size_kb: 1000
  file_list:
    - '/etc/passwd'
    - '/etc/services'
    - '/etc/sudoers'
secret_search:
  match_params:
    - MAXFILESIZE=10000
  regexp_match:
    - "AWS_ACCESS_KEY=(?i).*aws_access_key_id( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9]).*"
    - "AWS_SECRET_KEY=(?i).*aws_secret_access_key( *=+ *).*(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=]).*"
    - "PRIV_KEY=(?i)-+BEGIN(.*)PRIVATE KEY-+"
    - "DOCKER_AUTH=(?i).*\"auth\": *\".+\""
    - "API_KEY=(?i).*api(-|_)key( *=+ *).*(?<![A-Z0-9])[A-Z0-9]{20,60}(?![A-Z0-9]).*"
  ## Uncomment content_search: {} to configure file content searching
  # Very expensive operation - recommend you carefully test and review
  # content_search:
  #   match_params:
  #     - MAXFILESIZE=10000
  #   regexp_match:
  #     - "EXAMPLE_MATCH="
Malware - Disabling DB Updates
The db_update_enabled property of the malware.clamav object shown above in the analyzer_config.yaml controls whether the analyzer will ask the data syncer for the latest ClamAV database before each
analysis execution. By default, it is enabled and should be left on for up-to-date scan results. The db version is returned in the metadata section of the scan results available from the Anchore Enterprise API.
You can disable the update if you want to mount an external volume to provide the db data in /home/anchore/clamav/db inside the container (it must be read-write for the Anchore user). This can be used
to cache or share a db across multiple analyzers (e.g. using AWS EFS) or to support air-gapped deployments where the db cannot be automatically updated from the deployment itself.
Malware - Advanced Configuration
The path for the db and the db update configuration are also available as environment variables inside the analyzer containers. These should not need to be used in most cases, but
for air-gapped or other installations where the default configuration is not sufficient, they are available for customization.
Name | Description | Default
ANCHORE_CLAMAV_DB_DIR | Location of the db dir to read/write | /home/anchore/clamav/db
For most cases, Anchore uses the default values for the clamscan invocations.
If you would like to override any of the default values of those commands or replace existing ones, you can add the following to the analyzer_config.yaml:
4.12 - Max Image Size
Anchore Enterprise can be configured to have a size limit for images being added for analysis. Images that exceed the configured maximum size will not be added to Anchore, and the catalog service will log an error message providing details of the failure. This size limit is applied when adding images to Anchore Enterprise via AnchoreCTL, tag subscriptions, and repository watchers.
By default the max_compressed_image_size_mb feature is disabled.
It can be enabled via the max_compressed_image_size_mb property in the Anchore Enterprise configuration file or by using the ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB env variable.
When a value greater than zero is supplied, the value represents the size limit in MB of the compressed image.
When a value less than zero is supplied, it will disable the feature and allow images of any size to be added to Anchore.
A value of 0 will prevent any images from being added.
Finally, non-integer values will cause bootstrap of the service to fail.
If using Docker Compose with the default config, this can be set through the ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB env variable on the catalog service.
If using Helm, it can be configured by using the values file and adding the ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB env variable to the catalog.extraEnv property.
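For example, a sketch of the Helm values change using an illustrative 4096 MB limit:
catalog:
  extraEnv:
    - name: ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB
      value: "4096"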
4.13 - Custom Certificate Authority
If a custom CA certificate is required to access an external resource, then the certificate needs to be propagated to the following locations (see Additional Background below for details on each):
The operating system trust store
The Python certifi trust store used by the Anchore Enterprise services
The Node.js trust store used by the Anchore Enterprise UI
When might you need to add a CA Cert to Anchore Enterprise?
Using an SSL terminating network proxy in your Anchore deployment environment.
Anchore needs to be able to reach external https endpoints from vulnerability feeds to container registries.
Using a Container Registry with self-signed certificate or custom CA.
You can update the trust store OR use the --insecure option when configuring the registry in Anchore.
The operating system trust store is read by the skopeo utility (the tool used to interact with container registries) and python requests library that is used to access container registries to read manifests and pull image layers.
Adding your certificate(s)
Approach 1
The first approach is centred around creating a new Anchore Enterprise image and inserting the CA certs into the right places. You might need to perform this for both the Anchore Enterprise Core and Anchore Enterprise UI images.
The following Dockerfile illustrates an example of how this general process can be automated to produce your own container with a new custom CA cert installed.
1. Create Dockerfile
Example Dockerfile updating the certifi trust store for the Python Anchore Enterprise Image
FROM docker.io/anchore/enterprise:v5.X.X
USER root:root
COPY ./custom-ca.pem /home/anchore/venv/lib/python3.11/site-packages/certifi/
# This is to verify the CA's are in the correct format
RUN openssl crl2pkcs7 -nocrl -certfile /home/anchore/venv/lib/python3.11/site-packages/certifi/custom-ca.pem | openssl pkcs7 -print_certs -noout
COPY ./custom-ca.pem /etc/pki/ca-trust/source/anchors/
RUN update-ca-trust && trust list
RUN /usr/bin/cat /home/anchore/venv/lib/python3.11/site-packages/certifi/custom-ca.pem >> /home/anchore/venv/lib/python3.11/site-packages/certifi/cacert.pem
USER anchore:anchore
We suggest adding an indicator to the resulting image name that designates it as being custom built. Ex: enterprise:v5.X.X-custom
You will need to perform this on each build, store the new enterprise image in a private registry and update your Helm or Compose deployment to use the new image reference.
Approach 2 (Recommended)
This approach is about injecting the secrets/ca certs into the containers at runtime and therefore doesn’t require a new image to be built.
Docker Compose
Enterprise
The entrypoint of the enterprise container will enumerate certificates mounted at /home/anchore/certs, combine them with its built-in CAs and populate them all via the environment variables:
REQUESTS_CA_BUNDLE
SSL_CERT_DIR
The above should configure the running services to use the custom CA(s).
Please note that a shell opened with exec into a running container does not run /docker-entrypoint.sh, so the environment variables set by it may not be present in that shell.
Simply supply your own custom certificate(s) as environment variable(s) and volume mount(s). The example below is supplying a custom CA for use with LDAP in the Anchore Enterprise UI image.
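A minimal Docker Compose sketch for the UI service is shown below; the mount path and file name are illustrative, and only NODE_EXTRA_CA_CERTS itself is the documented mechanism (see the Node.js section below):
  ui:
    environment:
      - NODE_EXTRA_CA_CERTS=/home/node/custom-ca.pem
    volumes:
      - ./custom-ca.pem:/home/node/custom-ca.pem:ro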
Helm
For Helm deployments, first create the Kubernetes secret that will store your cert(s). The example below is supplying multiple certs in a custom-ca-cert secret and the anchore K8s namespace.
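For example (certificate file names are illustrative):
kubectl create secret generic custom-ca-cert \
  --from-file=custom-ca-1.pem \
  --from-file=custom-ca-2.pem \
  -n anchore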
Please note there are other certs you can supply and configure anchoreConfig.internalServicesSSL & anchoreConfig.keys.privateKeyFileName
Additional Background
Operating System
To add a certificate to the operating system trust store the CA certificate should be placed in the /etc location that is appropriate for the container image being used.
For Anchore 4.5.X and newer, the base container is Red Hat Universal Base Image 9.X, which stores certs in /etc/pki/ca-trust/source/anchors/ and requires the user to run the update-ca-trust command as root to update the system certs.
Anchore Enterprise UI - Node.js
The Anchore Enterprise UI is powered by Node.js and as such, when the UI makes calls to external services such as LDAP it might require a certificate.
Please note that Node.js can also pull certificates from the Operating System store.
Anchore Enterprise loads the certificate into the NODE_EXTRA_CA_CERTS environment variable.
Anchore Enterprise - Python
Certifi is a curated list of trusted certificate authorities that is used by the Python requests HTTP client library. The Python requests library is used by Anchore for all HTTP interactions, including when communicating with the Anchore Feed service, when webhooks are sent to a TLS-enabled endpoint, and between Anchore services if TLS has been configured. To update the Certifi trust store, the CA certificate should be appended onto the cacert.pem file provided by the Certifi library.
For Enterprise 5.1.x and newer, Python was upgraded to Python 3.11; certifi's cacert.pem is installed at /home/anchore/venv/lib/python3.11/site-packages/certifi/cacert.pem.
Debugging
How to know if you need a custom cert?
Have a proxy or custom CA in place? Can’t ignore self-signed certs? Then yes.
Ask your IT / Infrastructure Team, otherwise you can test the connections from your Anchore deployment/server to the service in question.
curl https://myregistry.example.com (if you see ssl verify errors then you might require a custom ca)
If you are able to ignore self-signed certs, you can do this for Container Registries in Anchore
Fetch, test and use the Custom Cert
If you have identified that you need to add a custom CA cert into Anchore, you can run the following to fetch and test the certificate before redeploying Anchore.
# fetch the cert
openssl s_client -showcerts -servername myregistry.example.com -connect myregistry.example.com:443 < /dev/null > cacert.pem
# test the cert
curl -v --cacert cacert.pem https://myregistry.example.com
You can take this certificate and add this to your Anchore deployment as described above.
4.14 - Network Proxies
As covered in the Network sections of the requirements document, Anchore requires three categories of network connectivity.
Registry Access
Network connectivity, including DNS resolution, to the registries from which Anchore needs to download images.
Feed Service
Anchore synchronizes feed data such as operating system vulnerabilities (CVEs) from Anchore Cloud Service. See Feeds Overview for the full list of endpoints.
Access to Anchore Internal Services
Anchore is composed of six smaller services that can be deployed in a single container or scaled out to handle load. Each Anchore service should be able to connect to the other services over the network.
In environments where access to the public internet is restricted, a proxy server may be required to allow Anchore to connect to the Anchore Cloud Feed Service or to a publicly hosted container registry.
Anchore can be configured to access a proxy server by using environment variables that are read by Anchore at run time.
https_proxy: Address of the proxy service to use for HTTPS traffic in the following form: {PROTOCOL}://{IP or HOSTNAME}:{PORT} eg. https://proxy.corp.example.com:8128
http_proxy: Address of the proxy service to use for HTTP traffic in the following form: {PROTOCOL}://{IP or HOSTNAME}:{PORT} eg. http://proxy.corp.example.com:8128
no_proxy: Comma delimited list of hostnames or IP address which should be accessed directly without using the proxy service. eg. localhost,127.0.0.1,registry,example.com
Environment Variables to Control Proxy Behavior
Setting the endpoints to HTTP proxy:
Set both HTTP_PROXY and http_proxy environment variables for regular HTTP protocol use.
Set both HTTPS_PROXY and https_proxy environment variables for HTTP + TLS (HTTPS) protocol use.
Setting endpoints to exclude from proxy use:
Set both NO_PROXY and no_proxy environment variables to exclude those domains from proxy use defined in the preceding proxy configurations.
If using Docker Compose these need to be set in each service entry.
If using Helm Chart, set these in the extraEnv entry for each service.
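For example, a sketch of setting the proxy variables on one service through Helm values (hostnames are illustrative; extend no_proxy with your internal Anchore service names and registries):
catalog:
  extraEnv:
    - name: https_proxy
      value: https://proxy.corp.example.com:8128
    - name: no_proxy
      value: localhost,127.0.0.1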
Notes:
Do not use double quotes (") around the proxy variable values.
Authentication
For proxy servers that require authentication the username and password can be provided as part of the URL:
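For example (credentials and host are illustrative):
https_proxy=https://username:password@proxy.corp.example.com:8128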
When setting up a network proxy, keep in mind that you will need to explicitly allow inter-service communication within the Anchore Enterprise deployment to bypass the proxy, and potentially other hostnames as well (e.g. internal registries), to ensure that traffic is directed correctly. In general, all Anchore Enterprise service endpoints (the URLs for enabled services in the output of an 'anchorectl system status' command), as well as any internal registries (the hostnames you may have set up with 'anchorectl registry add --username ...' or as part of an un-credentialed image add, 'anchorectl image add registry:port/...'), should not be proxied (i.e. added to the no_proxy list, as described above).
If you wish to tune this further, below is a list of each component that makes an external URL fetch for various purposes:
Catalog: makes connections to image registries (any host added via ‘anchorectl registry add’ or directly via ‘anchorectl image add’)
Analyzer: same as catalog
Data Syncer: by default connects to Anchore Data Service for downloading vulnerability datasets. See Data Feeds and Data Synchronization for more details.
4.15 - TLS / SSL
The following sections describe how to configure TLS for Anchore API and/or Anchore Enterprise Services.
Please note that the UI service currently does not support listening via TLS.
Internal TLS for Anchore Enterprise Services Helm deployments
graph TB
subgraph External Traffic
user[User]
kubernetes[Ingress Controller]
end
subgraph Services with Internal TLS
api[API]
policy[Policy]
catalog[Catalog]
analyzer[Analyzer]
reports[Reports]
reportsworker[Reports Worker]
notifications[Notifications]
dataSyncer[Data Syncer]
simplequeue[Simple Queue]
end
subgraph Services Without TLS
ui[UI]
end
user -- HTTP/HTTPS --> kubernetes
kubernetes -- HTTPS --> api
kubernetes -- HTTP --> ui
api <--> policy
api <--> catalog
api <--> analyzer
api <--> reports
api <--> reportsworker
api <--> notifications
api <--> dataSyncer
api <--> simplequeue
policy <--> catalog
catalog <--> analyzer
analyzer <--> reports
reports <--> reportsworker
reportsworker <--> notifications
notifications <--> dataSyncer
dataSyncer <--> simplequeue
The following will configure Anchore Enterprise internal services to communicate with one another via TLS.
This script is provided as a demonstration of how to generate certificates (generate-anchore-tls-certs.sh):
If Custom CA certificates are required for LDAP or Postgres be sure to append them to the anchore-tls/ca-cert.crt & anchore-tls/anchore.pem file.
The script can be called as follows:
chmod +x generate-anchore-tls-certs.sh
# Provide values for your kubernetes namespace containing anchore and your helm release
./generate-anchore-tls-certs.sh $NAMESPACE $RELEASE
Make the following adjustments in your helm values file:
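A hedged sketch of the relevant values is shown below; the exact key names vary between chart versions, so confirm them against the chart's values.yaml before applying:
certStoreSecretName: anchore-tls-certs
anchoreConfig:
  internalServicesSSL:
    enabled: true
    verifyCerts: true
    certSecretKeyFileName: anchore.key
    certSecretCertFileName: anchore.pem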
You will need to perform a Helm install or upgrade to apply all the changes and restart all the pods.
If using an ingress controller see the next section.
Separating API & UI Ingress for Helm deployments
If you are using ingress to access the Anchore Enterprise API & Anchore Enterprise UI, then you will need to separate the ingress configuration if you configure either External or Internal TLS.
Since the API supports TLS and the UI does not, once TLS is enabled for the API the ingress controller will need to send encrypted traffic to the API and unencrypted traffic to the UI.
Make the following change to your helm values file to configure the ingress to use TLS to communicate with the API service:
ingress:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # These are optional
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '600'
  uiHosts: []
Add an ingress directly in kubernetes for the UI:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: anchore-ingress-ui
  annotations:
    # These are optional
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '600'
spec:
  ingressClassName: nginx # NOTE: This could be another value such as alb
  tls: []
  rules:
    - host: anchore.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: anchore-enterprise-ui # Ensure this matches the name of the Anchore UI service
                port:
                  number: 80
Please note that configuring Internal TLS above also includes External TLS. If you performed the steps above then this is not necessary.
In the following example the external API service is configured to listen on port 443 and is configured with a certificate for its external hostname anchore.example.com
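A sketch of such a config.yaml section is shown below (certificate paths are illustrative):
services:
  apiext:
    enabled: true
    endpoint_hostname: anchore.example.com
    listen: '0.0.0.0'
    port: 443
    ssl_enable: true
    ssl_cert: /config/anchore.example.com-cert.pem
    ssl_key: /config/anchore.example.com-key.pem
    ssl_chain: /config/anchore.example.com-chain.pem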
Each service published in the Anchore Enterprise configuration (apiext, catalog, simplequeue, analyzer, policy_engine and kubernetes_webhook) can be configured to use transport level security.
Parameter | Description
listen | IP address of interface on which the service should listen (use '0.0.0.0' for all - default)
port | Port on which the service should listen
ssl_enable | Enable transport level security
ssl_cert | Name, including full path, of the certificate file
ssl_key | Name, including full path, of the private key file
ssl_chain | [optional] Name, including full path, of the certificate chain
The certificate files should be placed on a path accessible to the Anchore Enterprise service, for example in the /config directory which is typically mapped as a volume into the container. Note that the location outside the container will depend on your configuration - for example if you are volume mounting ‘/path/to/aevolume/config/’ on the docker host to ‘/config’ within the container, you’ll need to place the ssl files in ‘/path/to/aevolume/config/’ on the docker host, so that they are accessible in ‘/config/’ inside the container, before starting the service.
The ssl_chain file is optional and may be required by some certificate authorities. If your certificate authority provides a chain certificate then include it within the configuration.
Note: While a certificate may be purchased from a well-known and trusted certificate authority in some cases the certificate is signed by an intermediate certificate which is not included within a TLS/SSL clients trust stores. In these cases the intermediate certificate is signed by the certificate authority however without the full ‘chain’ showing the provenance and integrity of the certificate the TLS/SSL client may not trust the certificate.
Any certificates used by the Anchore Enterprise services need to be trusted by all other Anchore Enterprise services.
If an internal certificate authority is used the root certificate for the internal CA can be added to the Anchore Enterprise using the following procedure or SSL verification can be disabled by setting the following parameter:
internal_ssl_verify: False
4.16 - Notifications
Overview
Alert external endpoints (Email, GitHub, Slack, and more) about Anchore events such as policy evaluation results, vulnerability updates, and system errors with our new Notifications service. Configure notification endpoints and manage which specific events you need through Anchore Enterprise UI.
For more information on the Notifications Service in general, its concepts, and details on its configuration, please refer to the Notifications Service.
The following sections in this document describe the current endpoints available for configuration, the options provided for selecting events, the various actions you can do with a configuration (add, edit, test, and remove), and how to disable an endpoint as an admin.
Anchore Enterprise includes Notifications service to alert external endpoints about the system’s activity. Services that make up Anchore Enterprise generate events to record significant activity, such as an update, in the policy evaluation result or vulnerabilities for a tag, or an error analyzing an image. This service provides a mechanism to selectively notify events of interest to supported external endpoints. The actual notification itself depends on the endpoint - formatted message to Slack, email and MS Teams endpoints, tickets in GitHub and Jira endpoints, and JSON payload to webhook endpoint.
Glossary
Event
An information packet generated by an Anchore Enterprise service to indicate some activity.
Endpoint
External tool capable of receiving messages such as Slack, GitHub, Jira, MS Teams, email or webhook.
Endpoint Configuration
Connection information of the endpoint used for sending messages.
Selector
Criteria for selecting events to be notified.
Installation
Anchore Enterprise Notifications is included with Anchore Enterprise and is installed by default when deploying a trial quickstart with Docker Compose or a production deployment on Kubernetes.
Configuration
Enterprise Notifications Service
The service loads configuration from the notifications section of the config.yaml. See the following snippet of the configuration:
...
services:
  notifications:
    enabled: true
    require_auth: true
    endpoint_hostname: "<hostname>"
    listen: '0.0.0.0'
    port: 8228
    cycle_timers:
      notifications: 30
    # Set Anchore Enterprise UI base URL to receive UI links in notifications
    ui_url: "<enterprise-ui-url>"
The cycle_timers -> notifications setting controls how often the service checks for events in the system that need to be processed for notifications. The default is every 30 seconds.
The ui_url is used for constructing links to Enterprise UI pages in the notifications. Configure this property to the Enterprise UI's base URL. This URL should be accessible from the endpoint receiving the notification for the links to work correctly. If the value is not set, the notification message is still sent to the endpoint, but it won't contain a clickable link to the Enterprise UI.
Note: Any changes to the configuration require a restart of the service for the updates to take effect.
RBAC Permissions
In the Anchore Enterprise deployment, the table below lists the required actions and containing roles:
Description | Action | Roles
List all the available notification endpoints and their status | listNotificationEndpoints | Read Only, Read Write
List all available configurations for an endpoint | listNotificationEndpointConfigurations | Read Only, Read Write
Get an endpoint configuration and associated selectors | getNotificationEndpointConfiguration | Read Only, Read Write
Create an endpoint configuration and associated selectors | createNotificationEndpointConfiguration | Read Write
Update an endpoint configuration and associated selectors | updateNotificationEndpointConfiguration | Read Write
Delete an endpoint configuration and associated selectors | deleteNotificationEndpointConfiguration | Read Write
External Tools
To send notifications to an external tool/endpoint, the service requires connection information to that endpoint. See the following for the required information for each endpoint:
All endpoints in the Notifications service can be toggled as Enabled or Disabled. The endpoint status reflects the enabled or disabled state. By default, the status for all endpoints is enabled. Setting the endpoint status to disabled stops all notifications from going out to any configurations of that specific endpoint. This is a system-wide setting that can only be updated by the admin account; it is read-only for all other accounts.
Endpoint Configuration
The endpoint configuration is the connection information such as URL, user credentials, and so on for an endpoint. The service allows multiple configurations per endpoint. Endpoint configurations are scoped to the account.
Selector
The service provides a mechanism to selectively choose notifications and route them to a configured endpoint. This is achieved using a Selector, which is a collection of filtering criteria. Each event is processed against a Selector to determine whether it is a match or not. If the Selector matches the event, a notification is sent to the configured endpoint by the service.
For a quick list of useful notifications and associated Selector configurations, see Quick Selection.
A Selector encapsulates the following distinct filtering criteria: scope, level, type, and resource type. Some criteria allow a limited set of values, and others allow wildcards. The value for each criterion has to be set for the matching to compute correctly.
Scope
Allowed values: account, global
Events are scoped to the account responsible for the event creation.
account scope matches events associated with a user account.
global scope matches events of any and all users. global scope is limited to admin account only. Non-admin account users can only specify account as the scope.
Level
Allowed values: info, error, *
Events are associated with a level that indicates whether the underlying activity is informational or resulted in an error.
info matches informational events such as a policy evaluation or vulnerabilities update, image analysis completion, and so on.
error matches failures such as image analysis failure.
* will match all events.
Type
Allowed values: strings with or without regular expressions
Event types have a structured format <category>.<subcategory>.<event>. Thus, * matches all types of events. Category is indicative of the origin of the event.
system.* matches all system events.
user.* matches events that are relevant to individual consumption.
Omitting an asterisk will do an exact match. See the GET /event_types route definition in the external API for the list of event types.
Resource Type
In most cases, events are generated during an operation involving a resource. Resource type is metadata of that resource. For instance, image_tag is the resource type in a policy evaluation update event. * matches all resource types if you are uncertain what resource type to use.
Quick Selection
The following Selector configurations are for notifying a few interesting events.
Receive | Scope | Level | Type | Resource Type
Policy evaluation and vulnerabilities updates | account | * | user.checks.* | *
User errors | account | error | user.* | *
User infos | account | info | user.* | *
Everything user related | account | * | user.* | *
System errors | account | error | system.* | *
System infos | account | info | system.* | *
Everything system related | account | * | system.* | *
All | account | * | * | *
All for every account (admin account only) | global | * | * | *
Notifications UI Walkthrough
Supported Endpoints
Email
Send notifications to a specific SMTP mail service.
GitHub
Version control for software development using Git.
JIRA
Issue tracking and agile product management software by Atlassian.
Slack
Team collaboration software tools and online services by Slack Technologies.
Teams
Team collaboration software tools and online services by Microsoft.
Webhook
Send notifications to a specific API endpoint.
Event Selector Options
When adding or editing a configuration, selecting which events to be notified on can be as easy as choosing one of the above three options: All Notification Events, Policy & Vulnerability Events, or Error Events.
Advanced users can select Add Custom Selector for more granularity:
In the example shown, we configure to be notified on all system info events affecting any resource associated with the user’s account. For an in-depth explanation on the provided properties and their possible values, view our Selector documentation.
Adding a Configuration
If you haven’t already defined a configuration for an endpoint, simply click Let’s Add One! as shown above. Once you have, add additional configurations with Add New Configuration as shown below.
Upon doing so, a modal will appear with various properties shown on the left side. Note that based on the type of endpoint, these properties may differ.
To view the various requirements, check the documentation for Email, GitHub, JIRA, Slack, and Teams.
For more information on adding a custom selector, please view our Selector documentation.
Prior to saving your new configuration, feel free to test with the Test Configuration button. Then save with OK.
Note: If OK is not enabled, be sure all required fields have been filled out.
Editing a Configuration
The process to edit a configuration entry is started by clicking Edit which is found within the Actions column as shown above.
The various fields available for editing are the same shown when adding the configuration. For additional info on a specific field, hover over the provided question icon circled with an orange ring next to the field name.
At any time, you can select Cancel to disregard any changes you’ve made.
For testing any new changes prior to saving them, click Test Configuration.
To save your changes, click OK. If OK is not enabled, be sure all required fields have been filled out.
Testing a Configuration
When viewing your configurations, testing is easy - just look under the Actions column and click Test for that entry.
Otherwise, when adding or editing a configuration, search for the test button pictured above. It can be found near the bottom of the modal, next to the Cancel and OK buttons.
Removing a Configuration
To remove a specific notification configuration, simply click on the Remove button (as shown above) within the Actions column for that entry.
Select Yes to proceed with the deletion process or No to cancel. Please note that once you agree to remove the configuration, you won’t be able to recover it.
Admin-specific Actions
Disabling Endpoints
As an admin, navigate to System > Notifications and click on the toggle visible in the lower-right corner of the specific endpoint you’re aiming to disable.
By default, all endpoints (such as Email, Slack, and Webhook) are enabled out of the box. Disabling a specific endpoint requires admin privileges as it ensures all notifications are stopped from going out to any configuration for that endpoint system-wide.
Note that users are still able to add, edit, test, and remove notification configuration items, but no event messages will be sent for that endpoint until it is re-enabled.
4.16.1 - Slack
Notifications to Slack are in the form of messages to a channel.
Create a Slack endpoint configuration in the Notifications service either via the Enterprise UI, or the API directly.
4.17 - Reports
Overview
Anchore Enterprise Reports aggregates data to provide insightful analytics and metrics for account-wide artifacts.
The service employs GraphQL to expose a rich API for querying the aggregated data and metrics.
NOTE: This service captures a snapshot of artifacts in Anchore Enterprise at a given point in time. Therefore,
analytics and metrics computed by the service are not in real time, and may not reflect most up-to-date state
in Anchore Enterprise.
Installation
Anchore Enterprise Reports is included with Anchore Enterprise, and is installed by default when deploying a trial
quickstart with Docker Compose, or a production deployment
Kubernetes.
How it works
One of the main functions of Anchore Enterprise Reports is aggregating data. The service keeps a summary of all current
and historical images and tags for every account known to Anchore Enterprise. It also maintains vulnerability reports
and policy evaluations generated using the active bundle for all the images and tags respectively.
Configuration
Anchore Enterprise Reports are broken up into two services:
The reports_worker service which is responsible for the ingress and egress of data into our reports.
The reports service which is responsible for the report generation.
Each service has a configuration section in the values file. Below are sample configurations and the default values.
...
services:
  reports_worker:
    # Set enable_data_ingress to true for periodically syncing data from anchore enterprise into the reports service
    enable_data_ingress: true
    # Set enable_data_egress to true to periodically remove reporting data that has been removed in other parts of system
    enable_data_egress: false
    # data_egress_window defines a number of days to keep reporting data following its deletion in the rest of system.
    # Default value of 0 will remove it on next task run
    data_egress_window: 0
    # data_refresh_max_workers is the maximum number of concurrent threads to refresh existing results (etl vulnerabilities and evaluations) in reports service. Set non-negative values greater than 0, otherwise defaults to 10
    data_refresh_max_workers: 10
    # data_load_max_workers is the maximum number of concurrent threads to load new results (etl vulnerabilities and evaluations) to reports service. Set non-negative values greater than 0, otherwise defaults to 10
    data_load_max_workers: 10
    # Timers that describe how often each operation should run
    cycle_timers:
      reports_image_load: 600                 # MIN 300 MAX 100000 Default 600
      reports_tag_load: 600                   # MIN 300 MAX 100000 Default 600
      reports_runtime_inventory_load: 600     # MIN 300 MAX 100000 Default 600
      reports_extended_runtime_vuln_load: 1800 # MIN 300 MAX 100000 Default 1800
      reports_image_refresh: 7200             # MIN 3600 MAX 100000 Default 7200
      reports_tag_refresh: 7200               # MIN 3600 MAX 100000 Default 7200
      reports_metrics: 3600                   # MIN 1800 MAX 100000 Default 3600
      reports_image_egress: 600               # MIN 300 MAX 100000 Default 600
      reports_tag_egress: 600                 # MIN 300 MAX 100000 Default 600
    # Provides the ability to enable/disable individual runtime report loading.
    runtime_report_generation:
      inventory_images_by_vulnerability: true
      vulnerabilities_by_k8s_namespace: true
      vulnerabilities_by_k8s_container: true
      vulnerabilities_by_ecs_container: true
  reports:
    # GraphiQL is a GUI for editing and testing GraphQL queries and mutations.
    # Set enable_graphiql to true and open http://<host>:<port>/v2/reports/graphql in a browser for reports API
    enable_graphiql: true
    # This is the number of execution threads which will be used during report generation.
    max_async_execution_threads: 1
    # Configure async_execution_timeout to adjust how long a scheduled query must be running for before it is considered timed out
    # This may need to be adjusted if the system has large amounts of data and reports are being prematurely timed out.
    # The value should be a number followed by "w", "d", or "h" to represent weeks, days or hours
    async_execution_timeout: "48h"
    # Set use_volume to `true` to have the reports worker buffer report generation to disk instead of in memory. This should be configured
    # in production systems with large amounts of data (10s of thousands of images or more). Scratch volumes should be configured for the reports pods
    # when this option is enabled.
    use_volume: false
Any changes to the configuration require a restart of the service for the updates to take effect.
In an Anchore Enterprise deployment, any non-admin account user must at least have the listImages permission
to execute queries against the Reports API. There is an RBAC role available called report-admin which provides permissions to administer reports and schedules. Please see Role-Based Access Control
for more information.
Data ingress
The reports_worker service handles data ingress from Anchore Enterprise via the following asynchronous processes triggered
periodically:
Loader: Compares the working set of images and tags in Anchore Enterprise with its own records. Based on the
difference, images and tags along with the vulnerability report and policy evaluations are loaded into the service.
Artifacts deleted from Anchore Enterprise are marked inactive in the service.
This process is triggered periodically as described by the cycle timers listed above.
Refresher: Refreshes the vulnerability report and policy evaluations of all the images and tags actively
maintained by the service.
This process is triggered periodically as described by the cycle timers listed above.
Reports service may miss updates to artifacts if they are added and deleted in between consecutive ingress processes.
Data ingress is enabled by default. It can be turned off with enable_data_ingress: false in the config.yaml snippet
shown previously. In a quickstart deployment, add ANCHORE_ENTERPRISE_REPORTS_ENABLE_DATA_INGRESS=false to the
environment variables section of the reports service in docker-compose.yaml. When the ingress is turned off, the Reports
service will no longer aggregate data from Anchore Enterprise, and metric computations will also come to a halt. However,
the service will continue to serve API requests and queries with the existing data.
Data egress
It is highly recommended that data egress is enabled and that an egress window value is set to prevent unbounded DB growth. Removing old data from the system helps reduce DB disk size.
Data egress provides the ability to remove data which is no longer active in Anchore Enterprise from the stored report data.
This process is disabled by default and is controlled by the enable_data_egress value. The data_egress_window configuration setting determines how old this data must be prior to its removal.
Metrics
Reports service comes loaded with a few pre-defined/canned metric definitions. A metric definition consists of an
identifier, readable name, description and the type of the metric. The type is loosely based on statsd metric types.
Currently, all the pre-defined metrics are of type ‘counter’ - a measure of the number of items that match certain
criteria. A value for each of these metric definitions is computed using the data aggregated by the service.
All metric values are computed periodically every hour (3600 seconds). To modify the interval, update
cycle_timers -> reports_metrics in the config.yaml snippet above. In a quickstart deployment, add
ANCHORE_ENTERPRISE_REPORTS_METRICS_INTERVAL_SEC=<interval-in-seconds> to the environment variables section of the
reports service in docker-compose.yaml.
See it in action
To see Reports service in the Enterprise UI, see Dashboard or
Reporting & Remediation view. The dashboard view utilizes metrics generated by the
service and renders customizable widgets. The reports view employs graphQL queries and aggregates the results into
multiple formats (CSV, JSON, and so on).
4.18 - Runtime Inventory
Using Anchore's runtime inventory agents provides Anchore Enterprise access to what images are being used
in your deployments. This can help give insight into where vulnerabilities or policy violations are in your
production workloads.
Agents
Anchore provides agents for collecting the inventory of different container runtime environments:
As part of reporting on your runtime environment, Anchore maintains an active record of the containers, the images they run,
and other related metadata based on the time they were last reported by an inventory agent.
The configuration settings below allow you to specify how long inventory should remain part of the Catalog Service's working set.
These are the default settings found in the values file.
For each cluster/namespace reported from the inventory agent, the system will delete any previously reported
containers and images and replace it with the new inventory.
Note: The inventory_ttl_days is still needed to remove any cluster/namespaces that are no longer reported as well as
some of the supporting metadata (ie. pods, nodes). This value should be configured to be long enough that inventory isn’t incorrectly removed in case of an outage from the reporting agent.
The exact value depends on each deployment, but 7 days is a reasonable value here.
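For example, a sketch of a 14-day TTL in the Helm values file; the nesting under the catalog service is an assumption, so verify it against your values file:
anchoreConfig:
  catalog:
    runtime_inventory:
      inventory_ttl_days: 14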
This will delete any container and image that has not been reported by an agent in the last 14 days. This includes its supporting metadata (ie. pods, nodes).
This will keep any containers, images, and supporting metadata reported by an inventory agent indefinitely.
Deleting Inventory via API
Where it is not desirable to wait for the Image TTL to remove runtime inventory images it is possible to manually delete inventory items via the API by issuing a DELETE to /v2/inventories with the following query parameters.
inventory_type (required) - either ecs or kubernetes
context (required) - it must match a context as seen by the output of anchorectl inventory list
Kubernetes - this is a combination of cluster name (as defined by the anchore-k8s-inventory config) and a namespace containing running containers e.g. cluster1/default.
ECS - this is the cluster ARN e.g. arn:aws:ecs:eu-west-2:123456789012:cluster/myclustername
image_digest (optional) - set if you only want to remove a specific image
e.g. DELETE /v2/inventories?inventory_type=<string>&context=<string>&image_digest=<string>
Using curl: curl -X DELETE -u username:password "http://{servername:port}/v2/inventories?inventory_type=<string>&context=<string>&image_digest=<string>"
4.19 - Storage Configuration
Storage During Analysis
Scratch Space
Anchore uses a local directory for image analysis operations including downloading layers and unpacking the image content
for the analysis process. This space is necessary on each analyzer worker service and should not be shared. The scratch
space is ephemeral and can have its lifecycle bound to that of the service container.
Layer Cache
The layer cache is an extension of the analyzer’s scratch space that is used to cache layer downloads to reduce analysis
time and network usage during the analysis process itself. For more information, see Layer Caching.
Storing Analysis Results
Anchore Enterprise is a data intensive system and uses external storage systems for all data persistence. None of the services
are stateful in themselves.
For structured data that must be quickly queried and indexed, Anchore relies on PostgreSQL as its primary data store. Any
database that is compatible with PostgreSQL 13 or higher should work, such as Amazon Aurora and Google Cloud SQL.
For less structured data, Anchore implements an internal object store that can be overlaid on different backend providers,
but defaults to using the main postgres db to reduce the out-of-the-box dependencies. However, S3 is supported for leveraging external systems.
For more information on configuration and requirements for the core database and object stores see, Object Storage.
Analysis Archive
To aid in capacity management, Anchore provides a separate storage location where completed image analysis can be moved to. This reduces consumption of database capacity and primary object storage. It also removes the analysis from most API actions
but makes it available to restore into the primary storage systems as needed. The analysis archive is
configured as an alternate object store. For more information, see: Configuring Analysis Archive.
The Analysis Archive is an object store with specific semantics and thus is configured as an object store using the same
configuration options, just with a different config key: analysis_archive
Example configuration snippet for using the db for working set object store and S3 for the analysis archive:
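A sketch of such a snippet is shown below; the bucket name, endpoint, and credentials are illustrative, and the full set of driver options is covered under Object Storage:
services:
  catalog:
    object_store:
      compression:
        enabled: false
      storage_driver:
        name: db
        config: {}
    analysis_archive:
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config:
          access_key: MY_ACCESS_KEY
          secret_key: MY_SECRET_KEY
          url: 'https://s3-endpoint.example.com'
          region: null
          bucket: anchore-analysis-archive
          create_bucket: true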
By default, if no analysis_archive config is found or the property is not present in the config.yaml, the analysis archive
will use the object_store or archive (for backwards compatibility) config sections and those defaults (e.g. db if found).
Anchore stores all of the analysis archive objects in an internal logical bucket named analysis_archive that is distinct within the configured backend (e.g. a key prefix in the S3 bucket).
Changing Configuration
Unless there are image analyses actually in the archive, there is no data to move if you need to update the configuration
to use a different backend. Once an image analysis has been archived, however, to update the configuration you must follow
the object storage data migration process found here. As noted in that guide, if you need
to migrate to/from an analysis_archive config you'll need to use the --from-analysis-archive/--to-analysis-archive
options as needed to tell the migration process which configuration to use in the source and destination config files
used for the migration.
Common Configurations
Single shared object store backend: omit the analysis_archive config, or set it to null or {}
Different bucket/container: the object_store and analysis_archive configurations are both specified and identical
with the exception of the bucket or container values for the analysis_archive so that its data is split into a
different backend bucket to allow for lifecycle controls or cost optimization since its access is much less frequent (if ever).
Primary object store in DB, analysis_archive in external S3: this keeps latency low as no external service is
needed for the object store and active data but lets you use more scalable external object storage for archive data. This
approach is most beneficial if you can keep the working set of images small and quickly transition old analysis to the
archive to ensure the db is kept small and the analysis archive handles the data scaling over time.
4.19.2 - Database Storage
Anchore stores all metadata in a structured format in a PostgreSQL database to support API operations and searches.
Image digests to tag mapping (docker.io/nginx:latest is hash sha256:abcd at time t)
Image analysis content indexed for policy evaluation (files, packages, ..)
Feed data
vulnerability info
package info from upstream (gem/npm)
Accounts, users…
…
If the object store is not explicitly set to an external provider, then that data is also persisted in
the database, but it can be migrated to an external object store later.
Reducing Database Storage Usage
Beyond enabling a non-DB object store there are some configuration
options to reduce database storage and IO used by Anchore.
Configuration of Indexed DB Storage for Package DB File Entries
There is a configuration option for the policy engine service to disable the usage of
the database for storing indexed package database entries from each analyzed image. This data represents the files in
each distro package and their metadata (digests and permissions) from each scanned image in the image_package_db_entries table.
That table is only used by the policy engine to deliver the packages.verify policy trigger,
but if you do not use that trigger then the use of this storage can be disabled, thereby reducing database load and resource usage.
The data can be quite large, often in the thousands of rows per analyzed image, so for some customers that do not use this
data for policy, disabling the loading of this data can reduce database consumption significantly.
Disabling Indexed DB Storage for Package DB File Entries
In each policy engine’s config.yaml file, change:
enable_package_db_load: true
to
enable_package_db_load: false
You can configure this by adding an environment variable ANCHORE_POLICY_ENGINE_ENABLE_PACKAGE_DB_LOAD with your chosen value on the policy engine service for both Compose and Helm deployments. This is enabled or set to ’true’ by default.
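For example, a sketch for a Docker Compose deployment (the service name policy-engine is illustrative and should match your compose file):
services:
  policy-engine:
    environment:
      - ANCHORE_POLICY_ENGINE_ENABLE_PACKAGE_DB_LOAD=false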
Note that disabling the table usage will also disable support for the packages.verify trigger and any policies that have the
trigger in a rule will be considered invalid and return errors on evaluation. Any new policies that attempt to use the trigger
will be rejected on upload as invalid if the trigger is included.
Once this configuration is set, you may delete data in that db table to reclaim some database storage capacity. If
you’re interested in this option please contact support for guidance on this process.
Enabling Indexed DB Storage for Package DB File Entries
If you find that you do need the trigger, you can change the configuration back to use the table and support will be
restored. However, any images analyzed while the setting was 'false' will need to be re-analyzed in order to
populate their data in that table correctly.
4.19.3 - Layer Caching
Once an image is submitted to Anchore Enterprise for centralized analysis the system will attempt to retrieve metadata about the image from the Docker registry and if successful will download the image and queue the image for analysis. Anchore Enterprise can run one or more analyzer services to scale out processing of images. The next available analyzer worker will process the image.
Docker Images are made up of one or more layers, which are described in the manifest. The manifest lists the layers which are typically stored as gzipped compressed TAR files.
As part of image analysis Anchore Enterprise will:
Download all layers that comprise an image
Extract the layers to a temporary file system location
Perform analysis on the contents of the image including:
Scan for secret materials (api keys, private keys, etc.)
Following the analysis, the extracted layers and downloaded layer tar files are deleted.
In many cases images will share a number of common layers, especially if images are built from a consistent set of base images. To speed up analysis, Anchore Enterprise can be configured to cache image layers, eliminating the need to download the same layer for many different images. The layer cache is disabled in the default Anchore Enterprise configuration. To enable the cache, the following changes should be made:
Define a temporary directory for scratch image data
It is recommended that the cache data is stored in an external volume to ensure that the cache does not use up the ephemeral storage space allocated to the container host.
By default Anchore Enterprise uses the /tmp directory within the container to download and extract images. Configure a volume to be mounted into the container at a specified path and configure this path in config.yaml
tmp_dir: '/scratch'
In this example a volume has been mounted as /scratch within the container and config.yaml updated to use /scratch as the temporary directory for image analysis.
With the cache disabled the temporary directory should be sized to at least 3 times the uncompressed image size to be analyzed.
Enable the layer cache
Next, the layer cache should be enabled in order to tell the analyzer service to cache image layers.
To enable layer caching, adjust the layer_cache_max_gigabytes parameter in the analyzer section of the Anchore Enterprise Helm values file, for example:
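A sketch of the values change is shown below; the parameter name comes from the text above, while the nesting under anchoreConfig.analyzer is an assumption to verify against the chart's values.yaml:
anchoreConfig:
  analyzer:
    layer_cache_max_gigabytes: 4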
In the above, the layer cache is set to 4 gigabytes. When enabled this should be sized to at least 3 times the uncompressed image size + 4 gigabytes.
The minimum size for the cache is 1 gigabyte.
The cache uses a least recently used (LRU) policy.
The cache files will be stored in the anchore_layercache directory of the configured tmp_dir volume, as noted above.
4.19.4 - Object Storage
Anchore Enterprise uses a PostgreSQL database to store structured data for images, tags, policies, subscriptions and metadata
about images, but other types of data in the system are less structured and tend to be larger pieces of data. Because of
that, there are benefits to supporting key-value access patterns for things like image manifests, analysis reports, and
policy evaluations. For such data, Anchore has an internal object storage interface that, while defaulted to use the
same Postgres database for storage, can be configured to use external object storage providers to support simpler capacity
management and lower costs. The options are:
PostgreSQL database (default)
S3 Object Store
The configuration for the object store is set in the catalog’s service configuration in the config.yaml.
4.19.4.1 - Migrating Data to New Drivers
Overview
To cleanly migrate data from one archive driver to another, Anchore Enterprise includes some tooling that automates the process in the ‘anchore-manager’ tool packaged with the system.
The migration process is an offline process; Anchore Enterprise is not designed to handle an online migration.
For the migration process you will need:
The original config.yaml already used by the services; if services are split out or use different config.yaml files for different services, you need the config.yaml used by the catalog service
An updated config.yaml (named dest-config.yaml in this example), with the archive driver section of the catalog service config set to the config you want to migrate to
The db connection string from config.yaml; this is needed by the anchore-manager script directly
Credentials and resources (bucket etc) for the destination of the migration
At a high-level the process is:
Shutdown all anchore enterprise services and components. The system should be fully offline, but the database must be online and available. For a docker compose install, this is achieved by simply stopping the engine container, but not deleting it.
Prepare a new config.yaml that includes the new driver configuration for the destination of the migration (dest-config.yaml) in the same location as the existing config.yaml
Test the new dest-config.yaml to ensure correct configuration
Run the migration
Get coffee… this could take a while if you have a lot of analysis data
When complete, view the results
Ensure the dest-config.yaml is in place for all the components as config.yaml
Start anchore-engine
Migration Example Using Docker Compose Deployed Anchore Engine
The following is an example migration for an anchore-engine deployed via docker compose on a single host with a local postgresql container, which is essentially the setup used in the 'Installing Anchore Engine' documents. At the end of this section, we'll cover the caveats and things to watch for in a multi-node install of anchore engine.
This process requires that you run the command in a location that has access to both the source archive driver configuration and the new archive driver configuration.
Step 1: Shutdown all services
All services should be stopped, but the postgresql db must still be available and running.
docker compose stop anchore-engine
Step 2: Prepare a new config.yaml
Both the original and new configurations are needed, so create a copy and update the archive driver section to the configuration you want to migrate to:
cd config
cp config.yaml dest-config.yaml
<edit dest-config.yaml>
Step 3: Test the destination config
Assuming that config is dest-config.yaml:
[user@host aevolume]$ docker compose run anchore-engine /bin/bash
[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} check /config/dest-config.yaml
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Using config file /config/dest-config.yaml
[MainThread] [anchore_engine.subsys.object_store.operations/initialize()] [INFO] Archive initialization complete
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Checking existence of test document with user_id = test, bucket = anchorecliconfigtest and archive_id = cliconfigtest
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Creating test document with user_id = test, bucket = anchorecliconfigtest and archive_id = cliconfigtest
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Checking document fetch
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Removing test object
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Archive config check completed successfully
Step 3a: Test the current config.yaml
If you are running the migration from a different location than one of the anchore engine containers, run the same check as above but using /config/config.yaml as the input (skipped in this instance since we're running the migration from the same container).
Step 4: Run the Migration
By default, the migration process will remove data from the source once it has confirmed it has been copied to the destination and the metadata has been updated in the anchore db. To skip the deletion on the source, use the --nodelete option. It is the safest option, but if you use it, you are responsible for removing the source data later.
[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} migrate /config/config.yaml /config/dest-config.yaml
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Loading configs
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration from config: {
"storage_driver": {
"config": {},
"name": "db"
},
"compression": {
"enabled": false,
"min_size_kbytes": 100
}
}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration to config: {
"storage_driver": {
"config": {
"access_key": "9EB92C7W61YPFQ6QLDOU",
"create_bucket": true,
"url": "http://minio-ephemeral-test:9000/",
"region": false,
"bucket": "anchore-engine-testing",
"prefix": "internaltest",
"secret_key": "TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s"
},
"name": "s3"
},
"compression": {
"enabled": true,
"min_size_kbytes": 100
}
}
Performing this operation requires *all* anchore-engine services to be stopped - proceed? (y/N)y
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Initializing migration from {'storage_driver': {'config': {}, 'name': 'db'}, 'compression': {'enabled': False, 'min_size_kbytes': 100}} to {'storage_driver': {'config': {'access_key': '9EB92C7W61YPFQ6QLDOU', 'create_bucket': True, 'url': 'http://minio-ephemeral-test:9000/', 'region': False, 'bucket': 'anchore-engine-testing', 'prefix': 'internaltest', 'secret_key': 'TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s'}, 'name': 's3'}, 'compression': {'enabled': True, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/migration_context()] [INFO] Initializing source object_store: {'storage_driver': {'config': {}, 'name': 'db'}, 'compression': {'enabled': False, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/migration_context()] [INFO] Initializing dest object_store: {'storage_driver': {'config': {'access_key': '9EB92C7W61YPFQ6QLDOU', 'create_bucket': True, 'url': 'http://minio-ephemeral-test:9000/', 'region': False, 'bucket': 'anchore-engine-testing', 'prefix': 'internaltest', 'secret_key': 'TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s'}, 'name': 's3'}, 'compression': {'enabled': True, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration Task Id: 1
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Entering main migration loop
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migrating 7 documents
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/policy_bundles/2c53a13c-1765-11e8-82ef-23527761d060
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/manifest_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/analysis_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/image_content_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/manifest_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/analysis_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/image_content_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration result summary: {"last_state": "running", "executor_id": "3209ad44d7bb:37:139731996518208:", "archive_documents_migrated": 7, "last_updated": "2018-08-15T18:03:52.951364", "online_migration": null, "created_at": "2018-08-15T18:03:52.951354", "migrate_from_driver": "db", "archive_documents_to_migrate": 7, "state": "complete", "migrate_to_driver": "s3", "ended_at": "2018-08-15T18:03:53.720554", "started_at": "2018-08-15T18:03:52.949956", "type": "archivemigrationtask", "id": 1}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] After this migration, your anchore-engine config.yaml MUST have the following configuration options added before starting up again:
compression:
enabled: true
min_size_kbytes: 100
storage_driver:
config:
access_key: 9EB92C7W61YPFQ6QLDOU
bucket: anchore-engine-testing
create_bucket: true
prefix: internaltest
region: false
secret_key: TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s
url: http://minio-ephemeral-test:9000/
name: s3
Note: If something goes wrong you can reverse the parameters of the migrate command to migrate back to the original configuration (e.g. … migrate /config/dest-config.yaml /config/config.yaml)
Step 5: Get coffee!
The migration time will depend on the amount of data and the source and destination systems performance.
Step 6: View results summary
[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} list-migrations
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
id state start time end time from to migrated count total to migrate last updated
1 complete 2018-08-15T18:03:52.949956 2018-08-15T18:03:53.720554 db s3 7 7 2018-08-15T18:03:53.724628
This lists all migrations for the service and the number of objects migrated. If you’ve run multiple migrations you’ll see multiple rows in this response.
Step 7: Replace old config.yaml with updated dest-config.yaml
The system should now be up and running using the new configuration! You can verify with the anchorectl command by fetching a policy, which will have been migrated:
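For example, a quick check with anchorectl; these commands are a sketch, so confirm the exact syntax with anchorectl policy --help:
# List available policies, then fetch one of the migrated policy bundles
anchorectl policy list
anchorectl policy get <policy-id>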
If that returns the content properly, then you’re all done!
Things to Watch for in a Multi-Node Anchore Engine Installation
Before migration:
Ensure all services are down before starting migration
At migration:
Ensure the place you’re running the migration from has the same db access and network access to the archive locations
After migration:
Ensure that all components get the updated config.yaml. Strictly speaking, only containers that run the catalog service need the updated configuration, but it's best to ensure that any config.yaml in the system that has a services.catalog definition also has the proper and up-to-date configuration to avoid confusion or accidentally reverting the config.
Example Process with docker compose
# ls docker-compose.yaml
docker-compose.yaml
# docker compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------------------------
aevolumepy3_anchore-db_1 docker-entrypoint.sh postgres Up 5432/tcp
aevolumepy3_anchore-engine_1 /bin/sh -c anchore-engine Up 0.0.0.0:8228->8228/tcp, 0.0.0.0:8338->8338/tcp
aevolumepy3_anchore-minio_1 /usr/bin/docker-entrypoint ... Up 0.0.0.0:9000->9000/tcp
aevolumepy3_anchore-prometheus_1 /bin/prometheus --config.f ... Up 0.0.0.0:9090->9090/tcp
aevolumepy3_anchore-redis_1 docker-entrypoint.sh redis ... Up 6379/tcp
aevolumepy3_anchore-ui_1 /bin/sh -c node /home/node ... Up 0.0.0.0:3000->3000/tcp
# docker compose stop anchore-engine
Stopping aevolume_anchore-engine_1 ... done
# docker compose run anchore-engine anchore-manager objectstorage --db-connect postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres check /config/config.yaml.new
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_connect": "postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres", "db_connect_args": {"timeout": 30, "ssl": false}, "db_pool_size": 30, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Using config file /config/config.yaml.new
...
...
# docker compose run anchore-engine anchore-manager objectstorage --db-connect postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres migrate /config/config.yaml /config/config.yaml.new
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_connect": "postgresql+pg8000://postgres:mysecretpassword@anchore-db:5432/postgres", "db_connect_args": {"timeout": 30, "ssl": false}, "db_pool_size": 30, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Loading configs
[MainThread] [anchore_engine.configuration.localconfig/validate_config()] [WARN] no webhooks defined in configuration file - notifications will be disabled
[MainThread] [anchore_engine.configuration.localconfig/validate_config()] [WARN] no webhooks defined in configuration file - notifications will be disabled
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration from config: {
"compression": {
"enabled": false,
"min_size_kbytes": 100
},
"storage_driver": {
"name": "db",
"config": {}
}
}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration to config: {
"compression": {
"enabled": true,
"min_size_kbytes": 100
},
"storage_driver": {
"name": "s3",
"config": {
"access_key": "Z54LPSMFKXSP2E2L4TGX",
"secret_key": "EMaLAWLVhUmV/f6hnEqjJo5+/WeZ7ukyHaBKlscB",
"url": "http://anchore-minio:9000",
"region": false,
"bucket": "anchorearchive",
"create_bucket": true
}
}
}
Performing this operation requires *all* anchore-engine services to be stopped - proceed? (y/N) y
...
...
...
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration result summary: {"last_updated": "2018-08-14T22:19:39.985250", "started_at": "2018-08-14T22:19:39.984603", "last_state": "running", "online_migration": null, "archive_documents_migrated": 500, "migrate_to_driver": "s3", "id": 9, "executor_id": "e9fc8f77714d:1:140375539468096:", "ended_at": "2018-08-14T22:20:03.957291", "created_at": "2018-08-14T22:19:39.985246", "state": "complete", "archive_documents_to_migrate": 500, "migrate_from_driver": "db", "type": "archivemigrationtask"}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] After this migration, your anchore-engine config.yaml MUST have the following configuration options added before starting up again:
...
...
# cp config/config.yaml config/config.yaml.original
# cp config/config.yaml.new config/config.yaml
# docker compose start anchore-engine
Starting anchore-engine ... done
Migrating Analysis Archive Data
The object storage migration process migrates any data stored in the source config to the destination configuration. If
the analysis archive is configured to use the same storage backend as the primary object store, that data is migrated
along with all other data. However, if the source or destination configurations define different storage backends for the
analysis archive than that used by the primary object store, then additional parameters are necessary in the
migration commands to indicate which configurations to migrate to/from.
The most common migration patterns are:
Migrate from a single backend configuration to a split configuration to move analysis archive data to an external system (db -> db + s3)
Migrate from a dual-backend configuration to a single-backend configuration with a different config (e.g. db + s3 -> s3)
Migrating a single backend to split backend
For example, moving from the unified db backend (the default configuration) to a db + s3 configuration with s3 used for the analysis archive.
Anchore stores its internal data in logical 'buckets' that are overlaid onto the storage backend in a driver-specific
way, so to migrate specific internal buckets (effectively, classes of data), use the --bucket option in the
manager CLI. This should generally not be necessary, but for specific kinds of migrations it may be needed.
The following command will execute the migration. Note that the --bucket option refers to an internal Anchore logical bucket, not
an actual bucket in S3:
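A sketch of such a command follows; the logical bucket name analysis_archive and the exact flag placement are assumptions, so check anchore-manager objectstorage migrate --help before running:
anchore-manager objectstorage --db-connect ${db} migrate /config/config.yaml /config/dest-config.yaml --bucket analysis_archive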
Next, migrate the object data in the analysis archive from the old config (s3 bucket 'analysisarchive') to the new config
(s3 bucket 'newanchorebucket'):
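The flag names in the sketch below are assumptions and may differ in your version; the intent is to point the same migrate subcommand at the analysis archive sections of the source and destination configs, so verify against anchore-manager objectstorage migrate --help:
anchore-manager objectstorage --db-connect ${db} migrate /config/config.yaml /config/dest-config.yaml --from-analysis-archive --to-analysis-archive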
The default object store driver is the PostgreSQL database driver which stores all object store documents within the PostgreSQL database.
A component of the object store driver is the archive_document. When the default object store driver is used, as opposed to a user configuring an S3 bucket, this is the location where image SBOMs, vulnerability scans, policy evaluations, and reports are stored.
Compression is not supported for this driver since the underlying database will handle compression.
There are no configuration options required for the Database driver.
The embedded configuration for Anchore Enterprise includes the default configuration for the db driver.
The S3 driver supports compression of documents. The documents are JSON formatted and see a significant reduction in
size through compression, but there is overhead incurred by running compression and decompression on every access of these
documents. Anchore Enterprise can be configured to only compress documents above a certain size to reduce unnecessary
overhead. In the example below, any document over 100kb in size will be compressed.
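The snippet below is a minimal sketch of that compression setting, following the same key layout shown in the migration output earlier; place it alongside the storage_driver block in the catalog object store configuration:
compression:
  enabled: true          # compress documents before writing them to the backend
  min_size_kbytes: 100   # only compress documents larger than 100kb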
Authentication
Anchore Enterprise can authenticate against the S3 service using one of two methods:
Amazon Access Keys
Using this method, provide an Access Key and Secret Access Key that have permission to read and write to the bucket. Parameters:
access_key and secret_key
Inherit IAM Role
Anchore Enterprise can be configured to inherit the IAM role from the EC2 or ECS instance that Anchore
Enterprise is running on, or one provided via a Kubernetes service account. When launching the EC2 instance that will run
Anchore Enterprise, you need to specify a role that includes the
ability to read and write from the archive bucket. To use IAM roles to authenticate, the access_key and secret_key
configuration options should be replaced by iamauto: True
Parameters: iamauto
S3 Endpoint and Bucket
url: (required if region not set) A URL for reaching an S3-API compatible service if you are not using actual Amazon S3. If url is configured, the region config value is ignored.
region: (required if url not set) The AWS region of the primary bucket host. If you are not using actual S3, this is probably not necessary unless your S3-compatible service requires it. If url is configured, this field is ignored.
bucket: (required) The name of the S3 bucket that Anchore will use for storing data.
create_bucket: (default: false) Try to create the bucket if it doesn’t already exist. This should be used very sparingly. For most cases, you should pre-create the bucket so that it has the permissions you desire, then set this to false.
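Putting these options together, here is a sketch of an S3 storage_driver configuration; all values are placeholders, the layout mirrors the migration output shown earlier on this page, and the commented iamauto line shows the IAM-role alternative to static keys:
storage_driver:
  name: s3
  config:
    access_key: <AWS access key>
    secret_key: <AWS secret key>
    # iamauto: true                       # use instead of access_key/secret_key to inherit an IAM role
    url: http://minio.example.com:9000/   # for S3-compatible services; omit when using region
    region: false                         # set to an AWS region when not using url
    bucket: anchore-object-store
    create_bucket: false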
Storing Object Store API key in a Kubernetes Secret
You can configure your object store API key to be pulled from a kubernetes secret as follows:
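The following values file fragment is a rough sketch only: the anchoreConfig key path, the ANCHORE_OBJ_STORAGE_SECRET_KEY variable name, and the secret name are assumptions for illustration, so adapt them to your chart version and your own Kubernetes Secret:
anchoreConfig:
  catalog:
    object_store:
      storage_driver:
        name: s3
        config:
          access_key: <AWS access key>
          secret_key: ${ANCHORE_OBJ_STORAGE_SECRET_KEY}   # resolved from the environment at startup
          bucket: anchore-object-store
extraEnv:
  - name: ANCHORE_OBJ_STORAGE_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: anchore-object-store-secret
        key: secret_key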
Anchore Enterprise supports 7 types of subscriptions:
Tag Update
Policy Update
Vulnerability Update
Analysis Update
Alerts
Repository Update
Runtime Inventory
Enabling some of these will generate a notification when the event is triggered while others may have a more significant impact on the system.
Note
Please read carefully what each subscription watches/manages and what effect enabling them may have on the overall deployment.
Tag Update
Granularity: Per Image Tag
Notification Generated: Yes
Background Process: Yes
Default Timer Frequency: every 60 min
Default State: Disabled (unless the Tag is added by AnchoreCTL)
Other Considerations: Adds new tag/digest pairs to the system
When the tag_update subscription is enabled, a background process, called a “watcher”, will periodically query the repository for any new image digests with the same tag.
For each new image digest found:
it will be pulled into the catalog and analyzed
a Tag Update Notification will be triggered.
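For example, subscriptions can be listed and toggled per tag with anchorectl; the argument order for activate below is an assumption, so confirm it with anchorectl subscription --help:
# Show current subscriptions, then enable tag_update for a specific tag
anchorectl subscription list
anchorectl subscription activate tag_update docker.io/library/nginx:latest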
Policy Updates
Granularity: Per Image Tag
Notification Generated: Yes
Background Process: Yes
Default Timer Frequency: every 60 min
Default State: Disabled
Other Considerations: None
This class of notification is triggered if a Tag to which a user has subscribed has a change in its policy evaluation status. The policy evaluation status of an image can be one of two states: Pass or Fail. If an image that was previously marked as Pass changes status to Fail or vice-versa then the policy update notification will be triggered.
The policy status of a Tag may be changed by a number of methods.
Change to policy
If a policy was changed, for example by adding, editing, or removing a policy check, then the policy status of an image may be affected. For example, adding a policy rule that denylists a specific package present in a given Tag may cause the Tag's policy status to move to Fail.
Changes to Allowlist
If an allowlist is changed to add or remove a CVE, then this may cause a policy status change. For example, if an image contains a package that is vulnerable to Critical Severity CVE-2017-9999, then this image may fail its policy evaluation. If CVE-2017-9999 is added to a CVE Allowlist that is mapped to the subscribed Tag, then the policy status may change from Fail to Pass.
Change in Policy / Allowlist Mapping
If the policy mapping is changed, then a new policy or allowlist may be applied to an image, which may change the status of the image. For example, changing the mapping to add a more restrictive policy may change a Tag's status from Pass to Fail.
Change in Package or Vulnerability Data
Some policy checks make use of data from external feeds. For example, vulnerability checks use CVE data feeds. Changes in data within these feeds may change the policy status, such as adding a new CVE vulnerability.
Vulnerability / CVE Update
Granularity: Per Image Tag
Notification Generated: Yes
Background Process: Yes
Default Timer Frequency: every 4 hours
Default State: Disabled
Other Considerations: None
This class of notification is triggered if the list of CVEs or other security vulnerabilities in the image changes.
For example, a user was subscribed to the library/nginx:latest tag. On the 12th of September 2017 a new vulnerability was added to the Debian 9 vulnerability feed which matched a package in the library/nginx:latest image, triggering a notification.
Based on the changes made by the upstream providers of CVE data (operating system vendors and NIST), CVEs may be added, removed, or modified. For example, a CVE initially marked as severity level Unknown may be upgraded to a higher severity level.
Note: A change to the CVE list in a Tag may not trigger a policy status change based on the policy rules configured for an image. In the example above the CVE had an unknown severity level which may not be tested by the policy mapped to this image.
Analysis Update
Granularity: Per Image Tag
Notification Generated: Yes
Background Process: No
Default Timer Frequency: n/a
Default State: Enabled
Other Considerations: None
This class of notification is triggered when an image has been analyzed. Typically, this is triggered when a new Tag has been added to the catalog.
A common use case for this trigger is to alert an external system that a new Tag was added and has been successfully analyzed.
Forcing a re-analysis on an existing image will also cause this notification to be generated.
Alerts
Granularity: Per Image Tag
Notification Generated: No
Background Process: Yes
Default Timer Frequency: 10 minutes
Default State: Disabled
Other Considerations: Enabling this subscription may be resource intensive as frequent policy evaluations will occur
The UI and API use stateful alerts that will be raised for policy violations on tags to which you are subscribed for alerts.
This raises a clear notification in the UI to help initiate the remediation workflow and address the violations via the remediation feature.
Once all findings are addressed the alert is closed, allowing an efficient workflow for users to bring their images into compliance with their policy.
Repository Update
Granularity: Per Repository
Notification Generated: No
Background Process: Yes
Default Timer Frequency: 60 seconds
Default State: Disabled
Other Considerations: Adds all the tags found in a repository to the system
This subscription, when enabled, will query the provided repository for any new tags. Any tag not already managed within Anchore will be added.
This subscription also provides the ability to determine if the tag_update subscription should be enabled for any new tag added to Anchore.
Please see Repositories for more information.
Please Note: Enabling this subscription may add a large number of tags to the system.
Runtime Inventory
Granularity: Per Runtime Inventory Context (Cluster/Namespace)
Notification Generated: No
Background Process: Yes
Default Timer Frequency: 2.5 minutes
Default State: Disabled
Other Considerations: Adds all the images found in the Context to the system
This subscription, when enabled, will find any newly reported images from the runtime inventory and add them to Anchore to be analyzed.
Please Note: Enabling this subscription may add a large number of images to the system.
4.21 - User Authentication
Overview
Anchore Enterprise offers authentication via HTTP basic auth, SAML/SSO, LDAP, and API keys.
For more information about specific types of Anchore authentication, see the following topics:
API keys (Application Programming Interface keys) are alphanumeric codes used to authenticate and control access to web-based services or APIs. They serve as unique identifiers for developers or applications seeking permission to interact with Anchore Enterprise. API keys are commonly used to manage and secure the flow of data between applications, allowing authorized access while preventing unauthorized usage. They play a crucial role in ensuring the integrity, security, and controlled usage of APIs, acting as a form of digital credential for connecting applications to external services.
Generating API Keys
A system user can generate an API key for their own use. Some users have specific RBAC roles (for example, account-user-admin) that allow management of API keys for other system users.
For more details on generating and managing API keys, please refer to this section: Generating API keys
Generating API keys as an SAML (SSO) user
API keys for SAML (SSO) users are disabled by default.
To enable API keys for SAML users, please update your helm chart values file with the following:
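A sketch of that values file change is shown below; the allow_api_keys_for_saml_users key name is an assumption, so verify it against the user_authentication settings documented for your chart version:
anchoreConfig:
  user_authentication:
    allow_api_keys_for_saml_users: true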
Note
API keys are an additional authentication mechanism for SAML (SSO) users that bypasses the authentication control of the IDP. When access has been revoked at the IDP, it does not automatically disable the user or revoke all API keys for the user. Therefore, when access has been revoked for a user, the system administrator is responsible to manually delete the Anchore User or revoke any API key which was created for the user.
Using API Keys
API keys are authenticated using basic auth. To use an API key, supply the special username _api_key; the password is the value that was output when you created the API key.
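For example, a hypothetical call against the Enterprise API using curl; the hostname, port, and path are placeholders for your deployment:
curl -u '_api_key:<API key value>' https://anchore.example.com:8228/v2/images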
API Keys generally inherit the permissions and roles of the user they were generated for, but there are certain operations you cannot perform using API keys regardless of which user they were generated for:
You cannot Add/Edit/Remove Users and Credentials.
You cannot Add/Edit/Revoke API Keys.
4.21.2 - User Credential Storage
Overview
All user information is stored in the Anchore DB. The credentials can be stored as plaintext or in a hashed form using the Argon2 hashing algorithm.
Hashed passwords are much more secure, but are more computationally expensive to compare. Hashed passwords cannot be used for internal service communication since they cannot be reversed. Anchore
provides a token-based authentication mechanism as well (a simplified Password-Grant flow of OAuth2) to mitigate the performance issue, but it requires that
all Anchore services be deployed with a shared secret in the configuration or a public/private keypair common to all services.
Passwords
The configuration of how passwords are stored is set in the user_authentication section of the config.yaml file
and must be consistent across all components of an Anchore Enterprise deployment. A mismatch
in this configuration between components will result in the system not being able to communicate internally.
user_authentication:
hashed_passwords: true|false
For all new deployments using the Enterprise Helm chart, hashed_passwords is defaulted to true.
All helm upgrades will carry forward the previous hashed_passwords setting.
NOTE: When the configuration is set to enable hashed_passwords, it must also be configured to use OAuth. When OAuth is not configured in the system, Anchore must be able to use HTTP Basic authentication between internal services and thus requires credentials that can be read.
Bearer Tokens/OAuth2
If Anchore is configured to support bearer tokens, the tokens are generated and returned to the user but never persisted in the database.
Users must still have password credentials, however. Password persistence and protection configuration still applies as in the Password section above.
Configuring Hashed Passwords and OAuth
NOTE: Password storage configuration must be done at the time of deployment; it cannot be modified at runtime or after installation with an existing DB, since
doing so will invalidate all existing credentials, including internal system credentials, and the system will not be functional. You must choose the mechanism
at system deployment time.
Set in config.yaml for all components of the deployment:
Option 1: Use a shared secret for signing/verifying oauth tokens
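A sketch mirroring the layout of Option 2 below, with a shared secret in place of the key paths; the secret value is a placeholder you should replace:
user_authentication:
  oauth:
    enabled: true
  hashed_passwords: true
  keys:
    secret: <shared secret string>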
Option 2: Use a public/private key pair, delivered as pem files on the filesystem of the containers anchore runs in:
user_authentication:
oauth:
enabled: true
hashed_passwords: true
keys:
private_key_path: <path to private key pem file>
public_key_path: <path to public key pem file>
Using environment variables with the config.yaml bundled into the Anchore-provided anchore/enterprise image is also an option.
NOTE: These are only valid when using the config.yaml provided in the image, because that file references them explicitly as replacement values.
ANCHORE_AUTH_SECRET = the string to use as a secret
ANCHORE_AUTH_PUBKEY = path to public key file
ANCHORE_AUTH_PRIVKEY = path to the private key file
ANCHORE_OAUTH_ENABLED = boolean to enable/disable oauth support
ANCHORE_OAUTH_TOKEN_EXPIRATION = the number of seconds a token should be valid (default is 3600 seconds)
ANCHORE_OAUTH_REFRESH_TOKEN_EXPIRATION = the number of seconds a refresh token is valid (default is 86400 seconds)
ANCHORE_AUTH_ENABLE_HASHED_PASSWORDS = boolean to enable/disable hashed password storage in the anchore db instead of clear text
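As an illustration, these variables could be supplied through the container environment, for example in docker compose; the service name and values below are placeholders:
services:
  api:
    environment:
      - ANCHORE_OAUTH_ENABLED=true
      - ANCHORE_AUTH_ENABLE_HASHED_PASSWORDS=true
      - ANCHORE_AUTH_SECRET=<shared secret string>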
4.21.3 - LDAP
Overview
The Lightweight Directory Access Protocol (LDAP) is a standardized and widely-used client-server protocol for accessing directory information, and can be enabled in Anchore Enterprise Client to authenticate users against an existing directory server.
In order to configure Anchore Enterprise Client for use with LDAP, the requisite information for connecting and authenticating with an LDAP directory server must first be provided by an administrator. For the purposes of determining what users can see and do once they are logged in, the administrator must also create one or more account association entries, called user mappings.
When an LDAP user authenticates, the Anchore Enterprise account associated with their session is determined by the first user mapping containing a search filter that matches the information in their LDAP record. LDAP authentication will fail if no matches are found, if the associated account is disabled, or if the user’s login credentials are incorrect.
The following sections in this document describe how to configure the Anchore Enterprise Client for use with an LDAP directory server, how to add user mappings, and how to log in to the application as an LDAP user.
Server Connection Properties
Administrators can provide the information used to connect Anchore Enterprise Client to an LDAP server from the LDAP sidetab in the Configuration view. Please note that this sidetab is not visible to non-administrative users.
The connection property fields shown in this view are described below:
Server URI: The ldap:// or ldaps:// URI of the LDAP directory server to query.
Manager DN: The distinguished name (DN) of an LDAP directory manager that the Anchore Enterprise Client can use to perform further queries about LDAP users during login. The directory manager is typically a privileged server administrator who, once authenticated, can access the LDAP record of any user intended to access the application.
Manager Password: The password associated with the Manager DN.
Base DN: The relative distinguished name in the LDAP directory tree hierarchy under which queries about users should be performed.
After you have entered the required connection properties, click the Save button to store them. Once stored, you can click the Test button to verify that the application can authenticate with the LDAP server using the details you’ve provided.
Note: Clicking Save when no values are provided in any of the fields will disable LDAP in the application and prevent LDAP from being displayed as an authentication option on the login screen.
User Mappings
LDAP user mappings contain search filters that unite the results of searches made against the data attributes of LDAP records with account information stored in Anchore Enterprise.
When an LDAP user submits their credentials on the login page, the first match encountered will provide Anchore Enterprise Client with an associated Anchore Enterprise account that is used to define the scope of what the user can see and do once they are fully authenticated.
If a match is detected, the submitted password is then validated against the one stored inside the matched LDAP record. If the password is correct and the associated Anchore Enterprise account is not suspended, the user will be successfully logged in. If no match is found or the password is incorrect, authentication will fail.
Adding a User Mapping
User mappings can be created by administrators from inside an account within the Accounts sidetab in the Configuration view, or from the LDAP sidetab in the area below the server connection properties form.
To add a new user mapping containing an LDAP search filter, click the Add New LDAP User Mapping button—or if no user mappings are currently defined, click the Let’s add one! button in the empty table.
You will be presented a dialog, similar to the one shown below, where you can provide an LDAP search filter:
LDAP Search Filters
The LDAP search filter in each mapping provides the criteria for associating that mapping with an Anchore Enterprise account. For example:
uid=$USERNAME
In the above example, the user mapping requires that the uid (user ID) attribute in an LDAP record matches the data represented by the $USERNAME token.
The =$USERNAME string is a required entry, and the actual value of the token resolves to whatever value the user enters in the Username field when they log in to Anchore Enterprise Client.
In Microsoft® Active Directory® (AD) implementations that support the LDAP protocol, the sAMAccountName attribute is the broad equivalent of uid:
sAMAccountName=$USERNAME
Note: The submitted value of $USERNAME should always correspond to an attribute with a unique value within the LDAP user record, or one that is unique in combination with other criteria. In Active Directory, the uniqueness of sAMAccountName is enforced, whereas this may not be true for uid (which is an optional attribute in AD).
Additional filter criteria beyond the user identity can be provided to assert granular control over user access. The following examples describe filters with narrower scope:
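For example, hypothetical filters that also require group membership or a department attribute; the group DN and department value are placeholders:
(&(uid=$USERNAME)(memberOf=cn=anchore-users,ou=Groups,dc=example,dc=com))
(&(sAMAccountName=$USERNAME)(department=Engineering))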
A detailed summary of the syntax and formula of LDAP search filters is beyond the scope of this document, however RFC 1558 provides a comprehensive description of how these entries are structured.
Mapping Order
By default, mappings are evaluated in priority order, with new entries being stored at the lowest priority. It can be challenging to infer the exact order of all mappings when they are spread across multiple accounts, so the table listing all current mappings in the LDAP sidetab shows the priority of every item and includes the account with which each is associated.
From here you can move row entries to a higher or lower order of precedence by clicking and holding a row's drag handle and then dragging the row up or down the list.
The priority order of user mappings determines the order in which search filters are evaluated when a user logs in. The first mapping to successfully locate an LDAP user record that matches the $USERNAME and any other criteria in its search filter will be used to determine the Anchore Enterprise account association for that user.
Once a user is located, subsequent mapping entries will be ignored, regardless of (possibly narrower) specificity, as only priority order matters here.
Test Mapping Behavior
You can evaluate the behavior of your user mappings by entering $USERNAME data (for example, the uid of a user) in the Check $USERNAME Against LDAP Mappings search field.
If an LDAP record is located that matches the search filter criteria of a mapping, you’ll be informed of which mapping provided the match, the associated Anchore Enterprise user, and the distinguished name of the user whose LDAP record was returned.
Login With LDAP Credentials
If a set of valid LDAP server connection properties has been stored by an administrator, the LDAP authentication option is activated in the application login view, in addition to the Default option of authenticating against the user records stored in Anchore Enterprise.
The value entered in the Username field will be used by the application to populate the $USERNAME token when evaluating each user mapping. The value entered in the Password field will be used to authenticate the matched user with the LDAP directory server.
Once these operations have completed, and providing the account associated with the mapping is not disabled, the user will be logged in.
4.21.4 - SSO
Overview
Anchore Enterprise can be configured to support user login to the UI using identities from external identity providers
that support SAML 2.0. Anchore never stores any credentials for the users, only their usernames
and Anchore permissions. All UI access is gated through a user’s valid login into the identity provider. Anchore uses the external
provider to verify username identity and initialize a username, account, and roles on first login for a new user. Once a
user’s identity is initialized in Anchore, the Anchore administrator can manage user permissions by managing the roles
associated with the user’s identity in Anchore itself.
Terms
SAML Terms:
Identity Provider (IDP) - The service that stores the identity database and provides identity and authentication services to Anchore.
Service Provider (SP) - The service providing resources to the end user, in this case, the Anchore Enterprise deployment.
Assertion Consumer Service (ACS) - The consumer of SAML assertions generated by the Identity Provider. For Anchore Enterprise,
the UI proxies the SAML assertion to the Anchore Enterprise API service which consumes it, but the UI is the network
endpoint the user’s browser interacts with.
Anchore Terms:
Native User - A user that exists in the Anchore DB and has login credentials (passwords).
SAML User - A user that exists in the Anchore DB only with a username and permissions, but no credentials. This prevents any username
conflicts. SAML users will also be associated with a single Identity Provider. This prevents overlapping usernames from multiple Identity Providers gaining access to unexpected users or accounts.
How SAML integration works
When a user logs into the Anchore Enterprise UI, they can choose which Identity Provider to authenticate with. User credentials are never passed to Anchore Enterprise. Instead, other information about the user is passed from the Identity Provider to Anchore. Some information used by Anchore during login include the username, authenticating Identity Provider, associated account, and initial RBAC permissions. After the initial login, RBAC permissions can be adjusted for this user directly by an Anchore administrator. This allows the Anchore administrator the ability to control access of Anchore users without having to gain access to the corporate IDP system.
Dynamic SAML User Provisioning
The first time a SAML User logs into Anchore, if the username is not found within the Anchore DB, a record will be automatically created for the user. If the user’s associated account is not found within the Anchore DB, an account record will also be automatically created at this time.
This is often referred to as Just In Time Provisioning (JIT).
Explicit Creation of SAML Users
An Anchore administrator has the ability to create other users with administrator privileges. This includes Native and SAML Users. When creating a SAML Administrator User, the username and the Identity Provider’s name will be required. Upon SSO login by this new user, it will be associated with account admin and have all the permissions of an Anchore administrator.
A global configuration mode is also available if SSO is the preferred method of login, but the Anchore administrator would like
explicit control over which users can gain access to Anchore Enterprise.
When this configuration mode is set to true, any users who have permissions to create other users, will now have the ability to explicitly create SAML Users. As stated above, when creating a SAML User, the username and the Identity Provider’s name will be required. In addition, an RBAC role will also be needed for each SAML User creation. Upon SSO login by this new user, it will be associated with the account it was created in and have all the RBAC permissions provided for it.
When this configuration mode is set to true, SSO logins are only permitted within Anchore for users who have existing SAML user records found in the Anchore DB.
When explicitly creating SAML Users, the account and RBAC role provided will take precedence over any default values or IDP Attributes which may be configured in the SAML Configuration described below. For more information, please see Mapping.
Note: Any users that have previously authenticated via SSO will continue to have access regardless of the configuration mode setting. If you wish to prevent future access when setting sso_require_existing_users to true, simply delete the user record in Anchore.
SSO Login Validation
During subsequent SSO logins, Anchore will find an existing user record in the Anchore DB. The following information will be validated:
The user record must be a SAML User. If the user was previously configured as a Native User and you want to convert it to a SAML User, simply delete the user record in Anchore and have the user log in again via SSO.
The user record must be authenticating from the same Identity Provider. If the user has been changed to authenticate via a different Identity Provider, simply delete the user record in Anchore and have the user log in again via SSO.
Configuration Overview
In order to use SAML SSO, the Anchore Enterprise deployment must:
Have OAuth enabled. This is required so that Anchore can issue bearer tokens for subsequent API usage by the UI against the system APIs.
Using hashed passwords is optional but highly recommended. See User Authentication
for more information on configuring OAuth and hashed password storage.
Be able to reach the IDP login URL from the user’s browser.
Be able to reach the metadata XML endpoint in the IDP (if using url).
Configuration of SAML SSO is done via API/UI operations but requires configuration both in your Identity Provider and in Anchore.
In the IDP:
Must support HTTP Redirect binding
Should support signed assertions and signed documents
Must allow unsigned client requests from Anchore
Must allow unencrypted requests and responses
Anchore IDP Configuration Fields are as follows.
Name - The name to use for this configuration. It will be part of the UI’s /service/auth/sso/ route as well as
the /saml/sso/ and /saml/login/ routes that are used to implement SSO.
Enabled - Whether auth via this configuration is allowed.
ACS HTTPS Port - HTTPS port if non-standard. If omitted or -1, 443 is used.
SP Entity ID - The identity for the Anchore system to present when requesting auth from the SAML IDP. This is typically
a URL identifying the Anchore deployment.
ACS URL - The URL to which SAML Responses should be sent from the IDP. For UI usage, this should be the hostname and
port of the UI deployment and a path: /service/sso/auth/{idp config name}.
Default Account - If set, this is the account that SSO users will be initialized to be members of upon sign-in the
first time. This property or IDP Account Attribute must be set.
Default Role - The role that will be granted to new users on first sign-in. Use this setting to apply a
consistent role to all users if you do not want that data imported from the IDP. This property or IDP Role Attribute must be set.
IDP Account Attribute - A SAML assertion attribute name from the SAML responses that Anchore will use to determine
the account name to initialize the user into. If the account does not exist, it is created. For more information on the
initialization process see Initializing User Identities below. This property or Default Account must be set.
IDP Username Attribute - A SAML assertion attribute name from the SAML responses that Anchore will use to determine
the username for the anchore identity. This is optional and typically will not need to be set. If omitted, the SAML Subject is used and this should meet most needs.
IDP Metadata URL - URL to retrieve the IDP metadata xml from. This value is mutually exclusive with IDP Metadata XML,
so only one of the two properties may be specified.
IDP Metadata XML - Raw XML for the IDP metadata. This value is mutually exclusive with IDP Metadata URL, so only one
of the two properties may be specified.
IDP Role Attribute - The SAML assertion attribute from the SAML responses that Anchore will use to initialize a user's
roles. This may be a multi-value property. This property or Default Role must be set.
Require Signed Assertions - If true, require the individual assertions in the SAML response to be signed by the IDP.
Require Signed Response - If true, require the SAML response document to be signed.
Using SAML Attributes to Initialize Users and Account in Anchore
The properties of the user including the account it belongs to, the roles it has in that account as well as any other accounts the user has role access to
are all initialized based on the combination of the Anchore IDP configuration and the SAML response presented by the IDP at the user’s first login.
See Mapping for more information on that process and how the configuration works.
Deleting SAML SSO Configuration
An Anchore administrator has the ability to create, modify, and delete the SAML Configuration. During deletion of the SAML Configuration, any user that was created with this Identity Provider, either dynamically or explicitly, will also be deleted.
4.21.4.1 - Mapping SSO Identities into Anchore
Overview of Mapping External Identities into Anchore
Anchore SSO support provides a way to keep users’ credentials centrally stored in a corporate Identity
Provider and outside of the Anchore deployment. Anchore’s SSO approach uses the external identity
store to authenticate a user and bootstrap its permissions using Anchore’s RBAC system. For each login, the user’s
identity is verified against the external provider but once the identity is initialized inside the Anchore deployment, its
access to resources is controlled by the Anchore Enterprise RBAC system. This approach allows Anchore admins to manage user
access without having to require administrator access to a corporate or central IT identity system while still being able
to leverage that system for defining identity and securing access credentials.
The identity mapping process has two distinct phases:
Initial identity bootstrap - Occurs on the first login of a user to Anchore causing dynamic construction of an Anchore user record.
Identity verification and assertion validation - Validates the administrators requirements against the external identity record on each login.
Defining the Username
By default, with SSO, the SAML Assertion’s “Subject” attribute is used to define the username. Using the subject is the
right solution in most situations, but in extreme cases it may be necessary to provide the username in some other form via
attributes in the SAML Response. This behavior can be configured by setting: idp_username_attribute in the SAML Configuration
within Anchore. This should only be used when the subject either cannot be used due to it being a transient ID as configured by the
IDP itself, or you want the username to map to some form other than the IDP’s username, email, or persistent ID.
If the idp_username_attribute is set to an attribute name, and that attribute is not found or has no value in the SAML
Response presented during login, then that user login will be rejected by Anchore.
If idp_username_attribute is an empty string or null, then the SAML Response’s subject attribute is used as the username.
This is the default behavior.
Defining the Account the User Belongs To
In Anchore, all users belong to an Account. When an SSO user logs into Anchore UI for the first time, the identity is initialized
with the username (as defined above), but the account to which the user belongs is configurable via a separate pair of configuration
properties in the SAML Configuration within Anchore. These configuration properties are mutually exclusive.
idp_account_attribute - If set in the SAML Configuration, this attribute must be found within the
SAML Response during each login for every user. The attribute value received is the ‘account name’. It must also be valid.
A valid value must be greater than three characters and must not be a reserved account name such as 'admin' or 'anchore-system'.
If the attribute is not found within the SAML Response or the value is not valid, the login is rejected.
default_account - If set in the SAML Configuration, its value is the account to which all users that log in
from this IDP will be assigned.
In both cases, on the initial login by the user, if the account does not already exist within Anchore, an external account with that name is created.
Defining the User’s Initial Roles
In Anchore, all users are allowed to have one or more Roles that describe a set of access permissions. Roles are assigned to the user via a separate pair of configuration properties in the SAML Configuration within Anchore. These configuration properties are mutually exclusive.
idp_role_attribute - If set in the SAML Configuration, the attribute must be found within the
SAML Response during each login for every user. The attribute value received is one or more 'role names'. The values must also be valid.
If the attribute is not found within the SAML Response or the value is not valid, the login is rejected.
default_role - If set in the SAML Configuration, its value will be the single role set for all users that log in with this IDP.
During a user’s first login, this role will be set on the account during user identity initialization. On subsequent logins for this user, the value will be ignored.
Revoking SSO User Access
Disable the Anchore account. Any user, SSO or otherwise, that is a member of a disabled account cannot log in or perform API operations.
If using idp_account_attribute or idp_role_attribute, simply remove or zero that attribute at the IDP for that user or group.
All affected users will no longer be able to log in to Anchore.
Changing the Anchore SAML Configuration
Initialization of identities and roles occurs on the user’s first login. Once initialized, the configuration must match the
SAML Response presented during each login for the user to log in.
Thus, changes to the SAML Configuration within Anchore may affect subsequent logins for your users.
For instance, if you change the SAML Configuration within Anchore to start using attributes
instead of defaults, a user’s SAML Response will need to contain the same attributes. Failure to find the correct attribute(s) with valid values will prevent the user’s login.
Example SSO configurations
Anchore and an external Identity Provider
Here are examples for both Okta and KeyCloak that provide simple defaults and identity mappings.
Because the account is taken from an attribute, a given user might have 'primary_group' = ['security_engineers'] and
thus be initialized in a different account in Anchore.
4.21.4.1.1 - KeyCloak SAML Example
Configuring SAML SSO for Anchore with KeyCloak
The JBoss KeyCloak system is a widely used and open-source identity management system that supports integration with applications via SAML and OpenID Connect. It also can operate as an identity broker
between other providers such as LDAP or other SAML providers and applications that support SAML or OpenID Connect.
The following is an example of how to configure a new client entry in KeyCloak and configure Anchore to use it to permit UI login by KeyCloak users that are granted access via KeyCloak configuration.
Configuring KeyCloak
Anchore supports multiple IDP configurations, each given a name. For this example we’ll choose the name “keycloak” for our configuration.
This is important as that name is used in several URL paths to ensure that the correct configuration is used for validating responses,
so make sure you pick a name that is meaningful to your users (they will see it in the login screen) and also that is url friendly.
Some config choices and assumptions specifically for this example:
Let’s assume that you are running Anchore Enterprise locally. Anchore Enterprise UI is available at: https://localhost:3000. Replace with the appropriate url as needed.
We’re going to choose keycloak as the name of this saml/sso configuration within Anchore. This will identify the specific configuration and is used in urls.
Based on that, the Single-SignOn URL for this deployment will be: https://localhost:3000/service/sso/auth/keycloak
Our SP Entity ID will use the same url: http://localhost:3000/service/sso/auth/keycloak
Assertion Consumer Service Redirect Binding URL - http://localhost:3000/service/sso/auth/keycloak
Save the configuration.
Download the metadata XML to import into Anchore:
Select the 'Installation' tab.
Select the format for your KeyCloak version:
Keycloak <= 5.0.0 - Select Format Option: SAML Metadata IDPSSODescriptor
Keycloak 6.0.0+ - Select Format Option: Mod Auth Mellon files, then unzip the downloaded .zip and locate idp-metadata.xml
Download or copy the XML to save in the Anchore configuration.
Configure Anchore Enterprise to use KeyCloak
You'll need the following information from KeyCloak in order to configure the SAML record within Anchore:
The name to use for the configuration, in this example keycloak
Metadata XML downloaded or copied from the previous section
In the Anchore UI, create an SSO IDP Configuration:
Login as admin
Select “Configuration” Tab on the top
Select “SSO” on the left-side menu
Click “Let’s Add One” in the configuration listing
Enter the values:
Name: “keycloak” - This is the name of the configuration and will be referenced in login and sso URLs, so we use the value chosen at the beginning of this example
Enabled: True - This controls whether or not users will be able to log in with this configuration. We'll enable it for the example but can disable it later if no longer needed.
ACS HTTPS Port: -1 or 443 - This is the port to use for HTTPS to the ACS (Assertion Consumer Service, in this case the UI). It is only needed if you need to use a non-standard https port
SP Entity ID: http://localhost:3000/service/sso/auth/keycloak (NOTE: this must match the Client ID you used for the Client in the KeyCloak setup)
Default Account: keycloakusers for this example, but can be any account name (existing or not) that you'd like the users to be members of. See Mappings for more information on how this is used.
Default Role: read-write for this example so that the users have full access to the account to analyze images, setup policies, etc.
IDP Metadata XML: Paste the downloaded or copied XML from KeyCloak in step 4.3 above
Require Signed Assertions - Select off
Require Signed Response - Select on
Save the configuration
You should now see a 'keycloak' option in the login screen for the Anchore Enterprise UI. This will redirect users to log in to the KeyCloak instance with their username/password and will create a new user in Anchore in the keycloakusers account with the read-write role.
4.21.4.1.2 - Microsoft Entra ID SAML Example
Configuring SAML SSO for Anchore with Microsoft Entra ID (formerly Azure Active Directory)
Azure Enterprise Applications allow identities from an Azure Directory to be federated via Single Sign On (SSO) to other applications. In doing so Azure is acting as a SAML Identity Provider (IdP) that can be used with Anchore Enterprise as a SAML Service Provider (SP).
Configuring Azure
Anchore supports multiple IDP configurations, each given a name. For this example we’ll choose the name “azure” for our configuration.
This is important as that name is used in several URL paths to ensure that the correct configuration is used for validating responses,
so make sure you pick a name that is meaningful to your users (they will see it on the login screen) and that is also URL-friendly.
Some config choices and assumptions specifically for this example:
Let's assume that you are running Anchore Enterprise locally, with the Anchore Enterprise UI available at https://anchore.example.com. Replace with the appropriate URL as needed.
We're going to choose azure as the name of this SAML/SSO configuration within Anchore. This identifies the specific configuration and is used in URLs.
Based on that, the Single Sign-On URL for this deployment will be: https://anchore.example.com/service/sso/auth/azure
Our SP Entity ID will use the same url: https://anchore.example.com/service/sso/auth/azure
Type “Enterprise Applications” into the search bar in the top middle of the screen and click on it.
Click on Create your own application near the upper left.
Select SAML as the single sign-on method.
From your new Azure AD Enterprise Application select Single sign-on.
Select Edit Basic SAML Configuration (section 1).
For both Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL), enter the URL for your Anchore deployment followed by "/service/sso/auth/azure". The word "azure" can be customized; it becomes the name of the SSO configuration that users will see each time they log in via SSO. Example URL: https://anchore.example.com/service/sso/auth/azure (make note of the URL for upcoming steps). Click Save when finished.
Copy the “App Federation Metadata Url” found in section 3. This will be used in the next section.
Configure Users and Groups that will be allowed to login to Anchore.
Select Users and groups under Manage.
Click Add user/group.
Click on the hyper-linked “None Selected”.
Select/Check users and/or groups who will be granted access to login to Anchore.
Press Select when finished.
Configure Anchore Enterprise to use the Azure Identity Provider
Ensure you have a non-admin Account created in Anchore. In this example I am creating an account named “saml” and setting it as the default account to create users who use this SSO configuration but do not already exist in Anchore.
Login to Anchore and navigate to System, SSO and click on Add new SAML IDP.
Enter the values (Note that all values can be changed except Name):
Under “Name” enter “azure” or whatever you choose to name the SSO integration.
Select the account created or selected in step 1 for “Default Account”.
Select the appropriate role for “Default Role”.
Paste the Federation Metadata URL from the previous section into “IDP Metadata URL”.
Uncheck “Require Signed Response?”.
Save the configuration. Configuration is complete when you see a login with 'azure' option on the login screen.
Users can now log in to your Anchore deployment using this Azure endpoint.
See: Mapping Users and Roles in SSO for more information on using the account
and role defaults, IDP attribute values and understanding how identities are mapped into Anchore’s RBAC system.
Using Azure Groups with Anchore User Groups
User groups in Anchore can grant users access to multiple Anchore accounts with varying/selectable access levels in each.
In this example we have two teams within the organization named TeamA and TeamB. We will create a group in Azure for each team that will be sent to Anchore upon SAML login where each team will have their own Anchore account. We can optionally grant each team read-only access to the other teams account.
To map groups from Azure to Anchore:
Create an account in Anchore for TeamA and TeamB.
In Azure create or add desired groups to the “Users and Groups” section of your Anchore SSO Enterprise Application.
In Anchore create (a) user group(s) matching group names from Azure. Please note that the name of a user group cannot be changed but its contents can be changed at any time. For the sake of this example add both the TeamA and TeamB accounts. Grant the TeamA user group full control to the TeamA account and read-only to the TeamB account. When adding the user group for TeamB grant full-admin access to the TeamB account and read-only access to the TeamA account.
In your Azure Enterprise Application under Single Sign On select edit on Attributes and Claims, section 2.
Click on Add a group claim.
For “Which groups associated with the user should be returned in the claim” Select “Groups assigned to the application”
For Source attribute select “Cloud-only group display names”
At this point, when logging in via Azure SSO, a user will either land in the Default Account and need to switch accounts from the dropdown in the upper right, or, if an IDP Account Attribute was configured to initially place them in the desired account, they will only need to switch accounts as appropriate.
4.21.4.1.3 - Okta SAML Example
Configuring SAML SSO for Anchore with Okta
Some config choices and assumptions specifically for this example:
Anchore UI endpoint: http://localhost:3000. Replace with the appropriate url as needed.
IDP Config Name: okta. This will identify the specific configuration and is used in urls, and can be any url-safe string you’d like.
The Single Sign-on URL (also called the Assertion Consumer Service/ACS URL) for this deployment will be: http://localhost:3000/service/sso/auth/okta.
This is constructed with the UI endpoint and path /service/sso/auth/{IDP Config Name}
Our SP Entity ID will use the same url: http://localhost:3000/service/sso/auth/okta. This could be different but for simplicity we use the same value.
Configure Okta: Add an Application
See Okta SAML config for how to create a new application authorization server. The following notes apply to specific steps of that walkthrough.
In step #6
Single sign-on URL: this is the URL Okta will direct users to. It must be a URL in the Anchore Enterprise UI based on the name you chose for the configuration. In our example: http://localhost:3000/service/sso/auth/okta
Set the Use this for Recipient URL and Destination URL checkbox as well.
Set the Audience URI (SP Entity ID) to a URI that will identify the Anchore installation. This can be the same as the single sign-on URL for simplicity. We'll need to enter this value later in the Anchore config as well.
Leave Default RelayState empty
Name ID format can be left “Unspecified” or set to an email or username format.
Choose the application username that makes sense for your install. Anchore can support a regular username or email address for the usernames.
In step #7, these attribute statements are not required but can be set. This is, however, where you can set additional
attributes that you want Anchore to use to initialize the user’s Anchore account or permission set. Later in the SAML Configuration
you can specify attributes that Anchore will look for to extract the account name and roles for initializing a user that doesn’t yet exist in Anchore.
In step #9, be sure to copy the metadata URL; Anchore will need that value. Right-click the metadata link and copy the link address. The URL should be something like: https://<youraccount>.okta.com/app/<appid>/sso/saml/metadata
Finish the setup and save the Application entry.
Important: To allow Okta users to login to Anchore you need to assign the Okta user to this new Application entry. See
Assign and unassign apps to users for information on this process.
Configure Anchore Enterprise to use the Okta Identity Provider
You'll need the following information from Okta to enter in the Anchore UI:
The name chosen for the configuration: okta in this case
Metadata XML URL (from “configuring okta” step 3.1 above)
The Single Sign-on/ACS URL described in Step 3
In the Anchore UI, create an SSO IDP Configuration:
Login as admin
Select “Configuration” Tab on the top
Select “SSO” on the left-side menu
Click “Let’s Add One” in the configuration listing
Enter the values:
Name: okta - This is the name of the configuration and will be referenced in login and sso URLs, so we use the value
chosen at the beginning of this example
Enabled: True - This controls whether or not users will be able to log in with this configuration. We'll enable it for
the example but can disable it later if no longer needed.
ACS HTTPS Port: -1 or 443 - This is the port to use for HTTPS to the ACS (Assertion Consumer Service, in this case the UI). It is only needed if you need to use a non-standard https port
Default Account - The account to add all Okta users to when they log in; for this example we use oktausers
Default Role - The role to grant Okta users when they log in initially. For this example, we use read-write, the standard user type that has most abilities except user management.
IDP Metadata URL - The url from “Configure Okta” step 3.1
Require Signed Assertions - Select On
Require Signed Response - Select On
Save the configuration. Configuration is complete when you see a login with 'okta' option on the login screen.
Users can now log in to your Anchore deployment using this Okta endpoint.
See: Mapping Users and Roles in SSO for more information on using the account
and role defaults, IDP attribute values and understanding how identities are mapped into Anchore’s RBAC system.
In this section you will learn how to create accounts and users, and assign roles, with the Anchore Enterprise UI.
Assumptions
You have a running instance of Anchore Enterprise and access to the UI.
You have the appropriate permissions to create accounts, users, and roles. This means you are either a user in the admin account, or a user that already is a member of the account-users-admin role for your account.
For more information on accounts, users, roles, and permissions see: Role Based Access Control
Navigation
After a successful login, navigate to the configuration tab on the main menu.
Creating Accounts
In order to create accounts, navigate to the accounts tab from inside the configuration view and select “Create New Account”.
Upon selection, a popup window will display asking for two items:
Account Name (required)
Email
In the following example I’ve created a ‘security’ account:
Now that the account has been created, I can begin to add users to it.
Viewing Role Permissions
To view the permissions associated with a specific role using the UI, select an account, and navigate to the roles tab:
To view the members in the account assigned to a specific role, select the ‘View’ button on the right-hand side.
Creating Users and assigning Roles
Immediately after creation, an account will by default have zero users. To add users, select the edit button corresponding to the account you would like to add users to. This will bring you to the account page, where you can add your first user by selecting the "Let's add one!" button.
Upon selection, a popup window will display asking for three items:
Username (required)
Password (required)
Assign Role(s)
Note that you can assign more than one role to a user. For a normal user with full access to add, update, and evaluate images, we recommend assigning the read-write role. The other roles are for specific use cases such as CI/CD automation and read-only access for reporting. See: Role Based Access Control for more details on the roles and their capabilities.
In this case I’ve assigned three roles to the user:
Once ‘OK’ is selected, the user will be created and you will be able to edit or remove the user as needed.
Deleting and Disabling Accounts
In order to delete an account, disable the account by sliding the button under the ‘Active’ column for the corresponding account, then select the ‘Remove’ button on the right-hand side.
A few notes to keep in mind when deleting accounts:
The ‘admin’ account is locked and cannot be deleted.
Once deletion is in progress, all resources (users, images, automated tasks, etc.) will start a garbage collection process and won't be viewable. The account will, however, still be present in the list to prevent admins from adding an account with the same name.
Once deleted, an account and its associated resources can't be recovered.
A couple notes on disabling accounts:
Disabling accounts is a way for administrators to freeze an account while still keeping any associated analysis info intact.
Any automated tasks associated with the disabled account will be frozen.
Switching Account Data Context
System administrator users are able to view another account’s data context using the dropdown located at the top-right:
Generating API Keys
Enterprise release 5.1 adds support for API keys for various operations. This facilitates use cases where the user does not want to expose their main credentials; for example, integrations can switch to using API keys instead of username/password credentials.
In order to generate an API key, navigate to the Enterprise UI and click on the top right button and select ‘API Keys’:
Clicking ‘API Keys’ will present a dialog that lists your active, expired and revoked keys:
To create a new API key, click on the ‘Create New API Key’ and this will open another dialog where it asks you for relevant details for the API key:
You can specify the following fields:
Name: The name of your API key. It is mandatory and must be unique; you cannot have two API keys with the same name.
Description: An optional text descriptor for your API key.
Expiry Date: An expiry date for your API key. You cannot specify a date in the past, and it cannot exceed 365 days by default.
Click Save to save your API key. The UI will display the output of the operation:
NOTE: Make sure you copy the value that is output; there is no way to retrieve this key value later.
Revoking API keys
If there is a situation where you feel your API key has been compromised, you can revoke an active key. This prevents the key from being used for authentication.
To revoke a key, click on the ‘Revoke’ button next to a key:
NOTE: Be careful when revoking a key; this is an irreversible operation, and you cannot mark it active later.
By default the UI only displays active API keys. If you want to see your revoked and expired keys, turn off the 'Show only active API keys' toggle:
Managing API Keys as an Admin
As an account admin, you can manage API keys for all users in the account you administer.
A global admin can manage API keys across all accounts and all users.
To access the API keys as an admin, click on the ‘System’ icon and navigate to ‘Accounts’:
Click 'Edit' for the account you want to manage keys for, then click the 'Tools' button next to the user whose keys you wish to manage:
4.22.2 - Accounts and Users
System Initialization
When the system first initializes, it creates a system service account (invisible to users) and an administrator account (admin) with a single administrator user (admin). The password for this user is set at bootstrap using a default value or an override available in the config.yaml on the catalog service (which is what initializes the db). There are two top-level keys in the config.yaml that control this bootstrap:
default_admin_password - To set the initial password (can be updated by using the API once the system is bootstrapped). Defaults to foobar if omitted or unset.
default_admin_email - To set the initial admin account email on bootstrap. Defaults to admin@myanchore if unset
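As a sketch, those two keys sit at the top level of the catalog service's config.yaml; the values shown here are placeholders:
default_admin_password: <choose-a-strong-password>
default_admin_email: admin@example.com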
Managing Accounts Using AnchoreCTL
These operations must be executed by a user in the admin account. These examples are executed from within the enterprise-api container if using the quickstart guide:
First, exec into the enterprise-api container if using the quickstart docker compose. For other deployment types (e.g. Helm chart into Kubernetes), execute these commands anywhere you have AnchoreCTL installed that can reach the external API endpoint for your deployment.
docker compose exec enterprise-api /bin/bash
Getting Account and User Information
To list all the currently present accounts in the system, perform the following command:
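(The commands below are a sketch; verify the exact subcommands and flags with anchorectl account --help.)
# anchorectl account list
To create a new account, a sketch of the add command with an illustrative name and email address:
# anchorectl account add devteam1 --email devteam1@example.com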
Note that the email address is optional and can be omitted.
At this point the account exists but contains no users. To create a user with a password, see below in the Managing Users section.
Disabling Account
Disabling an account prevents any of that account's users from being able to perform any actions in the system. It also disables all asynchronous updates on resources in that account, effectively freezing the state of the account and all of its resources. Disabling an account is idempotent; if it is already disabled, the operation has no effect. Accounts may be re-enabled after being disabled.
To restore a disabled account to allow user operations and resource updates, simply enable it. This is also idempotent; enabling an already enabled account has no effect.
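A sketch of the disable and enable commands, assuming they follow the same subcommand pattern as the other account operations shown here:
# anchorectl account disable devteam1
# anchorectl account enable devteam1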
Deleting an account is irreversible and will delete all of its resources (images, policies, evaluations, reports, etc).
Deleting an account will synchronously delete all users and credentials for the account and transition the account to the deleting state. At this point the system will begin reaping all resources for the account. Once that reaping process is complete, the account record itself is deleted. An account must be in a disabled state prior to deletion. Failure to be in this state results in an error:
# anchorectl account delete devteam1
error: 1 error occurred:
* unable to delete account:
{
"detail": {
"error_codes": []
},
"httpcode": 400,
"message": "Invalid account state change requested. Cannot go from state enabled to state deleting"
}
So, first you must disable the account, as shown above. Once disabled:
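(Output omitted; the command mirrors the failed attempt above.)
# anchorectl account delete devteam1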
Users exist within accounts, but usernames themselves are globally unique since they are used for authenticating API requests. User management can be performed by any user in the admin account in the default Anchore Enterprise configuration using the native authorizer. For more information on configuring other authorization plugins, see: Authorization Plugins and Configuration.
Create User in a User-Type Account
To create a new user credential within a specified account, you can issue the following command. Note that the ‘role’ assigned will dictate the API/operation level permissions granted to this new user. See help output for a list of available roles, or for more information you can review roles and associated permissions via the Anchore Enterprise UI. In the following example, we’re granting the new user the ‘full-control’ role, which gives the credential full access to operations within the ‘devteam1’ account namespace.
# ANCHORECTL_USER_PASSWORD=devteam1adminp4ssw0rd anchorectl user add --account devteam1 devteam1admin --role full-control
✔ Added user devteam1admin
Username: devteam1admin
Created At: 2022-08-25T17:50:18Z
Last Updated: 2022-08-25T17:50:18Z
Source:
Type: native
# anchorectl user list --account devteam1
✔ Fetched users
┌───────────────┬──────────────────────┬──────────────────────┬────────┬────────┐
│ USERNAME │ CREATED AT │ LAST UPDATED │ SOURCE │ TYPE │
├───────────────┼──────────────────────┼──────────────────────┼────────┼────────┤
│ devteam1admin │ 2022-08-25T17:50:18Z │ 2022-08-25T17:50:18Z │ │ native │
└───────────────┴──────────────────────┴──────────────────────┴────────┴────────┘
That user may now use the API:
# ANCHORECTL_USERNAME=devteam1admin ANCHORECTL_PASSWORD=devteam1adminp4ssw0rd ANCHORECTL_ACCOUNT=devteam1 anchorectl user list
✔ Fetched users
┌───────────────┬──────────────────────┬──────────────────────┬────────┬────────┐
│ USERNAME │ CREATED AT │ LAST UPDATED │ SOURCE │ TYPE │
├───────────────┼──────────────────────┼──────────────────────┼────────┼────────┤
│ devteam1admin │ 2022-08-25T17:50:18Z │ 2022-08-25T17:50:18Z │ │ native │
└───────────────┴──────────────────────┴──────────────────────┴────────┴────────┘
Deleting a User
Using the admin credential, or a credential that has a user management role assigned for an account, you can delete a user with the following command. In this example, we’re using the admin credential to delete a user in the ‘devteam1’ account:
ANCHORECTL_USERNAME=admin ANCHORECTL_ACCOUNT=admin ANCHORECTL_PASSWORD=foobar anchorectl user delete devteam1admin --account devteam1
✔ Deleted user
No results
Updating a User Password
Note that only system admins can execute this for a different user/account.
As an admin, to reset another user's credentials:
# ANCHORECTL_USER_PASSWORD=n3wp4ssw0rd anchorectl user set-password devteam1admin --account devteam1
✔ User password set
Type: password
Value: ***********
Created At: 2022-08-25T17:58:32Z
To update your own password:
# ANCHORECTL_USERNAME=devteam1admin ANCHORECTL_PASSWORD=existingp4ssw0rd ANCHORECTL_ACCOUNT=devteam1 anchorectl user set-password devteam1admin
❖ Enter new user password : ●●●●●●●●●●●
❖ Retype new user password : ●●●●●●●●●●●
✔ User password set
Type: password
Value: ***********
Created At: 2022-08-25T18:00:35Z
Or, to perform the operation fully-scripted, you can set the new password as an environment variable:
ANCHORECTL_USERNAME=devteam1admin ANCHORECTL_PASSWORD=existingp4ssw0rd ANCHORECTL_ACCOUNT=devteam1 ANCHORECTL_USER_PASSWORD=n3wp4ssw0rd anchorectl user set-password devteam1admin
✔ User password set
Type: password
Value: ***********
Created At: 2022-08-25T18:01:19Z
4.22.3 - Data Account/Context Switching
Overview
Administrators and specially-entitled standard users are offered the ability to context switch between the image analysis data contexts of different accounts. This capability allows you to view the analysis data held inside a different account while still retaining your own user profile configuration.
When you switch data context, the data-oriented aspects of the application will change, but the qualities specific to your original account (herein referred to as your actual account) remain the same. Administrators keep their original permission set and have full control within the switched account. The account availability and associated permission set for standard users are decided by the role configuration of their switching entitlement, and these roles can additionally be set to differ per account.
This feature allows users to gain insights into multiple datasets, can be used by administrators for troubleshooting purposes or to make ad-hoc modifications to the data-oriented aspects of any account, and provides standard users with an additional level and vector of access control.
The following sections in this document describe how to switch and reset data contexts, both as an administrator and as a standard user, and how administrators can assign this capability to standard users.
Administrative Users
Context switching as an administrator is available without prior configuration, and only requires that an account other than your own be available. When you click the account button in the top-right of the screen, you are presented with a menu that contains an entry called Switch Account Data Context, which will be enabled when one or more accounts other than your own are present.
Clicking this item displays a submenu that describes all currently available accounts—both active and disabled—into which you can switch context:
Your home account is represented by the label Actual. If an account is disabled, this is indicated by the label Disabled (note that only administrators can context switch into disabled accounts). The account category—administrator or standard user—is indicated by the user-type icon.
Your current data context is represented by an entry with an emphasized title and checkmark prefix. When you click an entry for a different account, the application view will switch to use the data provided by this new context. The account button and dropdown items are similarly updated to reflect this change:
You will also notice a change to the background color of the main view, which serves as a reminder that your current data context is now different than the one provided by your actual account. In addition, a button is now present on the navigation bar that allows you to immediately revert to your actual data context when clicked (you can of course also use the menu to do this):
In the above example, the analysis information now presented is exactly what a user of the standard account would see in their actual account. As an administrator, you are now free to browse and interact with this data, add tags or repositories for analysis, create policies etc., and there are no permission restrictions on any of these operations.
Note: only the analysis data context has switched; this new state does not extend to application data items such as private registry configurations.
Standard Users
Non-administrative users can also switch context if this capability has been conferred upon them by an administrator.
When you add a new standard user (or modify an existing one) you can optionally associate them with one or more additional accounts, providing those accounts are not currently disabled. The Add a New User dialog, which is accessed from within the account editor in the Configuration > Accounts view, is shown below:
Note: If an account is currently active and available for addition, but is subsequently disabled, the standard user will not be able to switch into that account.
For each associated account you must also provide one or more RBAC roles that determine how the standard user can interact with that account after they have switched context:
For example, a user may have full-control within their actual account, but could be restricted to read-only operations after switching context. You can provide multiple different roles for different accounts, but you must provide at least one role per account association:
Account associations can also be removed by clicking the X adjacent to each role list, or by removing the labels directly from the Associate Account(s) dropdown control.
Once you are satisfied with the user configuration, click OK to create (or update) these associations. The standard user will now be able to switch account data context using the same procedure as the one described for administrators, presented earlier in this document.
4.22.4 - Role-Based Access Control
Overview
Anchore Enterprise includes support for using Role-Based Access Control (RBAC) to control the permissions that a specific user has to a specific set of resources in the system. This allows administrators to configure specific, limited permissions on users, enabling limited-access usage for things like CI/CD automation accounts, read-only users for browsing analysis output, or security team users that can modify policy but not change image analysis configuration.
Anchore Enterprise provides a set of predefined roles. Please see the table below for the complete list.
The Enterprise UI contains an enumeration of the specific permissions granted to users that are members of each of the roles.
Roles, Users, and Accounts
Roles are applied within the existing account and user frameworks defined in Anchore Enterprise. Resources are still scoped to the account namespace and accounts provide full resource isolation (e.g. an image must be analyzed within an account to be referenced in that account). Roles allow users to be granted permissions in both the account to which they belong as well as external accounts to facilitate resource-sharing.
Terminology
User: An authenticated identity (a principal in rbac-speak).
Account: A resource namespace and user grouping that also defines an authorization domain to which permissions are applied.
Role: A named set of permissions.
Permission: An action to grant an operation on a set of resources.
Action: The operation to be executed, such as listImages, createRegistry, and updateUser.
Target: The resource to be operated on, such as an image digest.
Role Membership: Mapping a username to a role within a specific account. This confers the permissions defined by the role to resources in the specified account to the specified user. The user is not required to be a member of the account itself.
Constraints
A user may be a member of a role within one or more accounts.
A user may be a member of many roles, or no roles.
There is no default role set on a user when a user is created. Membership must be explicitly set.
Roles are immutable. The set of actions they grant is static.
Creating and deleting accounts is only available to users in the admin account. The scope of accounts and roles is different than other resources because they are global. The authorization domain for those resources is not the account name but rather the global domain: system.
Role Summary and Permissions
Role: system-admin
Allowed Actions: All actions within all domains.
Description: Administrative control over all domains within the system. USE WITH EXTREME CAUTION

Role: full-control
Allowed Actions: All actions within a specific account domain.
Description: Full control over any account it is granted for. USE WITH EXTREME CAUTION

Role: account-users-admin
Description: Manage account creation and addition of users to accounts.

Role: account-viewer
Allowed Actions: listAccounts
Description: Role which can list all accounts on the system. This role is only available for use in the system domain and can only be conferred by a system administrator.
Note: All account-scoped roles also have these permissions implicitly granted: selfListApiKeys, selfCreateApiKey, selfUpdateApiKey, selfDeleteApiKey, selfGetApiKey, selfGetCredentials, selfAddCredential, selfDeleteCredential
Granting Cross-Account Access
The Anchore API supports a specific mechanism for allowing a user to make requests in another account’s namespace, the x-anchore-account header. By including x-anchore-account: "desiredaccount" on a request, a user can attempt that request in the namespace of the other account. This is subject to full authorization and RBAC.
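For instance, a hedged sketch of listing images in another account by passing the header; the credentials, account name, and API path are illustrative and should be adjusted to your deployment:
curl -u devteam1admin:devteam1adminp4ssw0rd -H "x-anchore-account: devteam2" http://localhost:8228/v2/images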
To grant a username the ability to execute operations in another account, simply make the username a member of a role in the desired account. This can be accomplished in the UI or via API against the RBAC Manager service endpoint. For example, using curl:
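A minimal sketch, assuming the RBAC Manager exposes a role-members endpoint of the form /roles/{role}/members that accepts a username and a target account; verify the exact path and service endpoint against your deployment's API reference:
curl -u admin:foobar -X POST -H "Content-Type: application/json" -d '{"username": "devteam1admin", "for_account": "devteam2"}' http://<rbac-manager-endpoint>/roles/policy-editor/members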
This should be done with caution as there is currently no support for resource-specific access controls. A user will have the permitted actions on all resources in the other account (based on the permissions of the role). For example, making a user a member of the policy-editor role for another account will give them full ability to create, delete, and update that account's policy bundles.
WARNING: Because roles do not currently provide custom target/resource definitions, assigning users to the Account User Admin role for an account other than their own is dangerous because there are no guards against that user then removing permissions of the granting user (unless that user is an ‘admin’ account user), so use with extreme caution.
NOTE: admin account users are not subject to RBAC constraints and therefore always have full access to add/remove users to roles in any account. So, while it is possible to grant them role membership, the result is a no-op and does not change the permissions of the user in any way.
4.22.4.1 - User Groups
Overview
User groups are abstractions that allow an administrator to manage permissions for users across the system without having to manage each individual user’s permissions.
Administrators simply have to create a user group, define roles per account within the user group, and then associate users with it. Users can be associated with multiple user groups. Each user inherits roles from their user groups as well as any explicitly defined roles.
Users can be explicitly added to a User Group (as described above) or SAML users can have an indirect membership of a user group based on their IDP associations.
Note: User Group management is strictly limited to admin users only.
Terminology
User Group: A basic resource that grants roles and permissions to users on various accounts. E.g.
{
"name": "user-group-engineers",
"description": "The group permissions for all engineers"
}
User Group Roles: A collection of roles associated with a user group; this can span multiple accounts and have multiple roles per account.
IDP User Group Mappings: A set of User Groups that are mapped to a single identity provider. E.g.
{
"IDP Name": "keycloak",
"User Groups": ["user-group-engineers", "user-group-devsec", "user-group-auditors"]
}
User Group Native User Member: A native user who has been explicitly associated with a User Group. This user inherits all roles from the User Group in addition to any roles assigned directly to this user.
User Group IDP Member: A SAML user who is an indirect member of a User Group. As the SAML user authenticates, the IDP's User Group Mappings are used to determine whether this user should be associated with a User Group.
Native users
Native users are users that are defined in Anchore Enterprise and do not authenticate using an external SSO endpoint.
These users can be added to User Groups directly and inherit roles from the User Groups they are members of.
SAML(SSO) users
SAML users are users that authenticate using an external SAML IDP.
These users can be associated with User Groups based on their group memberships in the SAML IDP.
SAML users are automatically added to a User Group based on their group memberships in the SAML IDP and the IDP’s User Group associations.
User Group management
User Groups can be managed from the Anchore Enterprise UI or using the Anchore Enterprise API.
AnchoreCTL
User Groups can be managed using the anchorectl CLI tool. The following commands are available for User Group management:
To create a new User Group, use the following command:
# anchorectl usergroup add development --description "The development team"
✔ Added usergroup
Name: development
Description: The development team
Group Uuid: 4a5d8357-1fc3-44cf-8a1c-9882406df656
Created At: 2024-03-20T15:57:20.086665Z
Last Updated: 2024-03-20T15:57:20.086669Z
Account Roles:
Items:
To list all User Groups, use the following command:
# anchorectl usergroup list
┌─────────────┬──────────────────────┬──────────────────────────────────────┐
│ NAME │ DESCRIPTION │ GROUP UUID │
├─────────────┼──────────────────────┼──────────────────────────────────────┤
│ development │ The development team │ 4a5d8357-1fc3-44cf-8a1c-9882406df656 │
└─────────────┴──────────────────────┴──────────────────────────────────────┘
To edit the description of a User Group, use the following command:
# anchorectl usergroup update development --description "New development team description"
✔ Update usergroup
Name: development
Description: New development team description
Group Uuid: 4a5d8357-1fc3-44cf-8a1c-9882406df656
Created At: 2024-03-20T15:57:20.086665Z
Last Updated: 2024-03-20T16:00:17.989822Z
Account Roles:
Items:
To delete a User Group, use the following command:
# anchorectl usergroup delete development
✔ Deleted usergroup
No results
To add an account role to a User Group, use the following command:
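For example, a sketch that assumes the add form mirrors the role delete syntax shown below:
# anchorectl usergroup role add development dev_account --role image-analyzer,image-developer,read-only,repo-analyzer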
To list all account roles for a User Group, use the following command:
# anchorectl usergroup role list development
✔ Fetched usergroups accounts and roles
┌────────────────┬───────────────────────────────────────────────────────────┐
│ ACCOUNT/DOMAIN │ ROLES │
├────────────────┼───────────────────────────────────────────────────────────┤
│ dev_account │ image-analyzer, image-developer, read-only, repo-analyzer │
│ devops_account │ read-only │
└────────────────┴───────────────────────────────────────────────────────────┘
To remove account role(s) from a User Group, use the following command:
# anchorectl usergroup role delete development dev_account --role image-analyzer,image-developer
✔ Deleted role
No results
To add a native user to a User Group, use the following command:
# anchorectl usergroup user add development -u dev_user
✔ Added user(s)
┌──────────┬─────────────────────────────┐
│ USERNAME │ ADDED TO USER GROUP ON │
├──────────┼─────────────────────────────┤
│ dev_user │ 2024-03-20T16:30:20.092909Z │
└──────────┴─────────────────────────────┘
To list all members of a User Group, use the following command:
# anchorectl usergroup user list development
✔ Fetched users within usergroup
┌──────────┬─────────────────────────────┐
│ USERNAME │ ADDED TO USER GROUP ON │
├──────────┼─────────────────────────────┤
│ dev_user │ 2024-03-20T16:30:20.092909Z │
└──────────┴─────────────────────────────┘
To remove a native user from a User Group, use the following command:
# anchorectl usergroup user delete development -u dev_user
✔ Deleted user(s)
No results
4.23 - Windows Container Scanning
Anchore can analyze and provide vulnerability matches for Microsoft Windows images. Anchore downloads, unpacks, and analyzes the Microsoft Windows image contents similar to Linux-based images, providing OS information as well as discovered application packages like npms, gems, python, NuGet, and java archives.
Vulnerabilities for Microsoft Windows images are matched against the detected operating system version and KBs installed in the image. These are matched using data from the Microsoft Security Research Center (MSRC) data API.
Supported Windows Base Image Versions
The following are the MSRC Product IDs that Anchore can detect and provide vulnerability information for. These provide the basis for the main variants of the base
Windows containers: Windows, ServerCore, NanoServer, and IoTCore.
Product ID - Name
10951 - Windows 10 Version 1703 for 32-bit Systems
10952 - Windows 10 Version 1703 for x64-based Systems
10729 - Windows 10 for 32-bit Systems
10735 - Windows 10 for x64-based Systems
10789 - Windows 10 Version 1511 for 32-bit Systems
10788 - Windows 10 Version 1511 for x64-based Systems
10852 - Windows 10 Version 1607 for 32-bit Systems
10853 - Windows 10 Version 1607 for x64-based Systems
11497 - Windows 10 Version 1803 for 32-bit Systems
11498 - Windows 10 Version 1803 for x64-based Systems
11563 - Windows 10 Version 1803 for ARM64-based Systems
11568 - Windows 10 Version 1809 for 32-bit Systems
11569 - Windows 10 Version 1809 for x64-based Systems
11570 - Windows 10 Version 1809 for ARM64-based Systems
11453 - Windows 10 Version 1709 for 32-bit Systems
11454 - Windows 10 Version 1709 for x64-based Systems
11583 - Windows 10 Version 1709 for ARM64-based Systems
11644 - Windows 10 Version 1903 for 32-bit Systems
11645 - Windows 10 Version 1903 for x64-based Systems
11646 - Windows 10 Version 1903 for ARM64-based Systems
11712 - Windows 10 Version 1909 for 32-bit Systems
11713 - Windows 10 Version 1909 for x64-based Systems
11714 - Windows 10 Version 1909 for ARM64-based Systems
10379 - Windows Server 2012 (Server Core installation)
10543 - Windows Server 2012 R2 (Server Core installation)
10816 - Windows Server 2016
11571 - Windows Server 2019
10855 - Windows Server 2016 (Server Core installation)
11572 - Windows Server 2019 (Server Core installation)
11499 - Windows Server, version 1803 (Server Core Installation)
11466 - Windows Server, version 1709 (Server Core Installation)
11647 - Windows Server, version 1903 (Server Core installation)
11715 - Windows Server, version 1909 (Server Core installation)
Windows Operating System Packages
Just as Linux images are scanned for packages such as RPMs, DPKGs, and APKs, Windows images are scanned for installed components and Knowledge Base patches (KBs). When listing operating system content on a Microsoft Windows image, the results returned are numeric KB identifiers; the name and version fields are identical and both contain the KB ID.
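For instance, a hedged AnchoreCTL sketch for listing that OS-level (KB) content for an analyzed Windows image; the image tag is illustrative and the os content type is assumed to be supported by your version:
# anchorectl image content mcr.microsoft.com/windows/servercore:ltsc2019 -t os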
5 - Monitoring
After you have installed Anchore Enterprise, there are various ways to monitor its operations:
Added in Anchore Enterprise 2.2, the Health section within the System tab is an administrator’s new display for investigating the operational status of their system’s various services and feeds. Leverage this view to understand when your system is ready or if it requires intervention.
The following sections in this document describe how to determine system readiness, the state of your services, and the progression of your feed sync.
For more information on the overall architecture of a full Anchore Enterprise deployment, please refer to the Architecture documentation. Or refer to the Feeds Overview if you’re interested in the feeds-side of things.
System Readiness
Ready
(Tentatively) Ready
Not Ready
The indicator for system readiness can be seen from any screen by viewing the System tab header:
The system readiness status relies on the service and feed data which are routinely updated every 5 minutes. Using the example indicator provided above, once all the feed groups are successfully synced, the status icon will turn green.
For up-to-date information outside of the normal update cycle, navigate to the Health section within the System tab and click on Refresh Service Health, Refresh Feed Data, or manually refresh the page.
Services
As shown above and as of 2.2, there are five services required by the system to function (API, Catalog, Policy Engine, SimpleQueue, and Analyzer).
For every service, the Base URL, Host ID, and Version are displayed. As long as one instance of each service is up and available, the main system is regarded as ready. In the example image provided above, we see that we have multiple instances of the Policy Engine and Analyzer services.
For the full, filterable list of instances for that service, click on the numbers provided. In the case of the Policy Engine, that would be the 1/2 Available.
Note that orphaned services are filtered out by default in this view (with a toggle to include them again) but will still impact the availability count on the main page.
Any service errors are logged within the Events & Notifications tab, so we recommend following up there for more information or browsing our Troubleshooting documentation for remediation guidance.
Feeds Sync
Listed in this section are the various feed groups your system relies on for vulnerability and package data. This data comes from a variety of upstream sources and is vital for policy engine operations such as evaluating policies or listing vulnerabilities.
As shown, you can keep track of your sync progression using the Last Sync column. To manually update the feed data displayed outside of its normal 5-minute cycle, click the Refresh Feed Data button or refresh the page.
If you’d rather have them grouped by feed rather than listed out individually, you can toggle the layout from list to cards using the buttons in the top-right corner above the table:
Similar to the service cards, if you decide to have them grouped as we show below using the layout buttons, you can click on the number of groups synced to view the full, filterable list within.
When viewing a list of feed groups - whether through the default list or through a specific feed card - you can filter for a specific value using the input provided or click on the button attached to filter by category. In this case, groups can be filtered by whether they are synced or unsynced.
In the case of feed sync errors, they are logged within the Events & Notifications tab, so we recommend following up there for more information or browsing our Troubleshooting documentation for remediation guidance.
Or if you’re interested in an overview of the various drivers Enterprise Feeds uses, check out our Feeds Overview.
5.2 - Prometheus
Anchore Enterprise exposes Prometheus metrics in the API of each service if the config.yaml used by that service has the metrics.enabled key set to true.
Each service exports its own metrics and is typically scraped by a Prometheus installation to gather them. Anchore does not aggregate or distribute metrics between services. You should configure your Prometheus deployment or integration to scrape each Anchore service's API on the same port the service exposes, using the /metrics route.
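For illustration, a minimal sketch of the config.yaml key and a corresponding Prometheus scrape job; the target host:port pairs are placeholders for your deployment, and depending on your configuration the /metrics route may require credentials:
metrics:
  enabled: true

scrape_configs:
  - job_name: anchore
    metrics_path: /metrics
    static_configs:
      - targets: ['anchore-api:8228', 'anchore-catalog:8082', 'anchore-policy-engine:8087']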
Monitoring in Kubernetes and/or Helm Chart
Prometheus is very commonly used for monitoring Kubernetes clusters. Prometheus
is supported by core Kubernetes services. There are many guides on using
Prometheus to monitor a cluster and services deployed within, and also many
other monitoring systems can consume Prometheus metrics.
The Anchore Helm Chart includes a quick way to enable the Prometheus metrics on
each service container:
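For example, via --set at install or upgrade time; the release and chart names are illustrative, and the exact values key depends on your chart version (recent enterprise charts nest service configuration under anchoreConfig), so verify against the chart's values.yaml:
helm upgrade --install anchore anchore/enterprise --set anchoreConfig.metrics.enabled=true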
Or, set it directly in your customized values.yaml
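A corresponding sketch, under the same assumption about the key name:
anchoreConfig:
  metrics:
    enabled: true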
The specific strategy for monitoring services with prometheus is outside the
scope of this document. But, because Anchore exposes metrics on the /metrics
route of all service ports, it should be compatible with most monitoring
approaches (daemon sets, side-cars, etc).
Metrics of Note
Anchore services export a range of metrics. The following are some metrics that can help you determine the health and load of an Anchore deployment.
anchore_queue_length, specifically for queuename: “images_to_analyze”
This is the number of images pending analysis, in the not_analyzed state.
As this number grows you can expect longer analysis times.
Adding more analyzers to a system can help drain the queue faster and keep
wait times to a minimum.
This metric is exported from all simplequeue service instances, but is based
on the database state, so they should all present a consistent view of the
length of the queue.
anchore_monitor_runtime_seconds_count
These metrics, one for each monitor, record the duration of the async
processes as they execute on a duty cycle.
As the system grows, these will become longer to account for more tags to
check for updates, repos to scan for new tags, and user notifications to
process.
anchore_tmpspace_available_bytes
This metric tracks the available space in the “tmp_dir” location for each
container. This is most important for the instances that are analyzers where
this can indicate how much disk is being used for analysis and how much
overhead there is for analyzing large images.
This is expected to be consumed in cycles, with usage growing during
analysis and then flushing upon completion. A consistent growth pattern here
may indicate left over artifacts from analysis failures or a large
layer_cache setting that is not yet full. The layer cache (see Layer
Caching) is located in this space and thus will affect the metric.
process_resident_memory_bytes
This is the memory actually consumed by the instance, where each instance is
a service process of Anchore. Anchore is fairly memory intensive for large
images and in deployments with lots of analyzed images due to lots of json
parsing and marshalling, so monitoring this metric will help inform capacity
requirements for different components based on your specific workloads. Lots
of variables affect memory usage, so while we give recommendations in the
Capacity Planning document, there is no substitute for profiling and
monitoring your usage carefully.
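As an illustration of putting these metrics to work, here is a minimal Prometheus alerting rule built on the queue metric above; the rule name, threshold, and duration are arbitrary and should be tuned for your deployment:
groups:
  - name: anchore-capacity
    rules:
      - alert: AnchoreAnalysisBacklog
        expr: anchore_queue_length{queuename="images_to_analyze"} > 100
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: Anchore image analysis queue is backing up; consider adding analyzers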
5.3 - Event Log
Introduction
The event log subsystem provides users with a mechanism to inspect asynchronous events occurring across the various Anchore Enterprise services. Anchore events include periodically triggered activities such as vulnerability data feed syncs in the policy-engine service, image analysis failures originating from the analyzer service, and other informational or system fault events. The catalog service may also generate events for any repositories or image tags that are being watched, when the system encounters connectivity, authentication, authorization, or other errors in the process of checking for updates. The event log is aimed at troubleshooting the most common failure scenarios (especially those that happen during asynchronous operations) and at pinpointing the reasons for failures, which can subsequently be used to help with corrective actions. Events can be cleared from Anchore Enterprise in bulk or individually.
The Anchore events (drawn from the event log) can be accessed through the Anchore Enterprise API and AnchoreCTL, or can be emitted as webhooks if your Anchore Enterprise is configured to send webhook notifications. For API usage refer to the document on using the Anchore Enterprise API.
Accessing Events
The anchorectl command can be used to list events and filter through the results, get the details for a specific event and delete events matching certain criteria.
# anchorectl event --help
Event related operations
Usage:
event [command]
Available Commands:
delete Delete an event by its ID or set of filters
get Lookup an event by its event ID
list Returns a paginated list of events in the descending order of their occurrence
Flags:
-h, --help help for event
Use " event [command] --help" for more information about a command.
For help regarding global flags, run --help on the root command
Note: Events are ordered by the timestamp of their occurrence; the most recent events are at the top of the list and the least recent at the bottom.
There are a number of ways to filter the event list output (see anchorectl event list --help for filter options):
For troubleshooting events related to a specific event type:
# anchorectl event list --event-type system.analysis_archive.image_archive_failed
✔ List events
┌──────────────────────────────────┬──────────────────────────────────────────────┬───────┬──────────────┬───────────────┬────────────────┬────────────────────┬────────────────────────────┐
│ UUID │ EVENT TYPE │ LEVEL │ RESOURCE ID │ RESOURCE TYPE │ SOURCE SERVICE │ SOURCE HOST │ TIMESTAMP │
├──────────────────────────────────┼──────────────────────────────────────────────┼───────┼──────────────┼───────────────┼────────────────┼────────────────────┼────────────────────────────┤
│ 35114639be6c43a6b79d1e0fef71338a │ system.analysis_archive.image_archive_failed │ error │ nginx:latest │ image_digest │ catalog │ anchore-quickstart │ 2022-08-24T22:48:23.18113Z │
└──────────────────────────────────┴──────────────────────────────────────────────┴───────┴──────────────┴───────────────┴────────────────┴────────────────────┴────────────────────────────┘
To filter events by level such as ERROR or INFO:
anchorectl event list --level info
✔ List events
┌──────────────────────────────────┬─────────────────────────────────────────────┬───────┬─────────────────────────────────────────────────────────────────────────┬───────────────┬────────────────┬────────────────────┬─────────────────────────────┐
│ UUID │ EVENT TYPE │ LEVEL │ RESOURCE ID │ RESOURCE TYPE │ SOURCE SERVICE │ SOURCE HOST │ TIMESTAMP │
├──────────────────────────────────┼─────────────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────────────────────┼───────────────┼────────────────┼────────────────────┼─────────────────────────────┤
│ 60f14821ff1d407199bc0bde62f537df │ system.image_analysis.restored_from_archive │ info │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest │ catalog │ anchore-quickstart │ 2022-08-24T22:53:12.662535Z │
│ cd749a99dca8493889391ae549d1bbc7 │ system.analysis_archive.image_archived │ info │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest │ catalog │ anchore-quickstart │ 2022-08-24T22:48:45.719941Z │
...
Note: The event listing response is paginated; anchorectl displays the first 100 events matching the filters. For all the results, use the --all flag.
All available options for listing events:
# anchorectl event list --help
Returns a paginated list of events in the descending order of their occurrence. Optional query parameters may be used for filtering results
Usage:
event list [flags]
Flags:
--all return all events (env: ANCHORECTL_EVENT_ALL)
--before string return events that occurred before the ISO8601 formatted UTC timestamp
(env: ANCHORECTL_EVENT_BEFORE)
--event-type string filter events by a prefix match on the event type (e.g. "user.image.")
(env: ANCHORECTL_EVENT_TYPE)
-h, --help help for list
--host string filter events by the originating host ID (env: ANCHORECTL_EVENT_SOURCE_HOST_ID)
--level string filter events by the level - INFO or ERROR (env: ANCHORECTL_EVENT_LEVEL)
-o, --output string the format to show the results (allowable: [text json json-raw id]; env: ANCHORECTL_FORMAT) (default "text")
--page int32 return the nth page of results starting from 1. Defaults to first page if left empty
(env: ANCHORECTL_PAGE)
--resource-type string filter events by the type of resource - tag, imageDigest, repository etc
(env: ANCHORECTL_EVENT_RESOURCE_TYPE)
--service string filter events by the originating service (env: ANCHORECTL_EVENT_SOURCE_SERVICE_NAME)
--since string return events that occurred after the ISO8601 formatted UTC timestamp
(env: ANCHORECTL_EVENT_SINCE)
For help regarding global flags, run --help on the root command
Event listing displays a brief summary of each event. To get more detailed information about an event, such as the host where the event occurred or the underlying error, use the get command:
# anchorectl event get c31eb023c67a4c9e95278473a026970c
✔ Fetched event
UUID: c31eb023c67a4c9e95278473a026970c
Event:
Event Type: system.image_analysis.registry_lookup_failed
Level: error
Message: Referenced image not found in registry
Resource:
Resource ID: docker.io/aerospike:latest
Resource Type: image_reference
User Id: admin
Source:
Source Service: catalog
Base Url: http://catalog:8228
Source Host: anchore-quickstart
Request Id:
Timestamp: 2022-08-24T22:08:28.811441Z
Category:
Details: cannot fetch image digest/manifest from registry
Created At: 2022-08-24T22:08:28.812749Z
Clearing Events
Events can be cleared/deleted from the system in bulk or individually. Bulk deletion allows for specifying filters to clear the events within a certain time window. To delete all events from the system:
# anchorectl event delete --all
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure you want to delete all events:
▸ Yes
No
⠙ Deleting event
c31eb023c67a4c9e95278473a026970c
329ff24aa77549458e2656f1a6f4c98f
649ba60033284b87b6e3e7ab8de51e48
4010f105cf264be6839c7e8ca1a0c46e
...
Delete events before a specified timestamp (can also use --since instead of --before to delete events that were generated after a specified timestamp):
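For instance (the timestamp is illustrative):
# anchorectl event delete --before 2022-08-24T00:00:00Z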
In addition to access via the API and AnchoreCTL, Anchore Enterprise may be configured to send notifications for events as they are generated in the system via its webhook subsystem. Webhook notifications for event log records are turned off by default. To enable the 'event_update' webhook, uncomment the 'event_log' section under 'services->catalog' in config.yaml, as in the following example:
services:
  ...
  catalog:
    ...
    event_log:
      notification:
        enabled: True
        # (optional) notify events that match these levels. If this section is commented, notifications for all events are sent
        level:
        - error
Note: In order for events to be sent via webhook notifications, you'll need to ensure that the webhook subsystem is configured in config.yaml (if it isn't already) - refer to the document on subscriptions and notifications for information on how to enable webhooks in Anchore Enterprise. Event notifications will be sent to the 'event_update' webhook endpoint if it is defined, and to the 'general' webhook endpoint otherwise.
Events via the UI
The Events tab is your gateway to current and historical activity happening in your system. View various events such as policy evaluation and vulnerability updates, system errors, feed syncs, and more.
The following sections in this document describe how to view event details, how to filter for specific events you’re interested in, and how to manage events with bulk deletion.
Viewing Events
In order to view events, navigate to the Events & Notifications > View Events tab. By default, the most recent activity (up to 1000 events) is shown and is automatically updated for you every 5 minutes. Note that if you have applied any filters through the search bar, your results will need to be refreshed manually.
Top-level details such as the event's level (whether it's an INFO or ERROR event), type, message, and affected resource are shown. Dig into a specific event by clicking View Details under its Actions column to expand the row.
Additional information such as the originating service and host ID is available in the expanded row. Any details given by the service are also provided in JSON format to view or copy to the clipboard.
Filtering Events
Often, you might want to search for a specific event type or events that happened after a certain time. In this case, use the Search Events bar near the top of the page to select a filter to search on. These include:
Level
Filter events by level - INFO or ERROR
Event Type
Filter events by a match on the event type (e.g. “user.image.*”)
Since
Return events that occurred after the timestamp
Before
Return events that occurred before the timestamp
Source Servicename
Filter events by the originating service
Source Host ID
Filter events by the originating host ID
Resource Type
Filter events by the type of resource - tag, imageDigest, repository, etc.
Resource ID
Filter events by the id of the resource
Once you have selected and populated the filter fields you’re interested in, click Apply Filters to search and show those filtered results.
An alternative way to filter your results is through the in-table filter input. Note that this only applies against any data already fetched. To increase what you’re filtering on, click Fetch More near the top-right of the table for up to an additional 1000 items.
To remove any filters and reset to the default view, click Clear Filters.
Deleting Events
To assist with event management, event deletion has been added in the Enterprise 2.3 release.
Deleting individual events can be done by clicking Delete under the Actions column and selecting Yes to confirm. Note that after deletion, events are not recoverable.
Multi-select is available for deleting multiple events at a time. Upon selecting an event using the checkbox in the far-left column, a toolbar-like component will slide in at the bottom of the table. The number of events selected is shown along with the selection type, Clear Selection, and Delete Events options.
Checking the box in the header will select all events within that page.
By default, this is treated as a Custom selection. Choosing All Retrieved auto-selects every event already fetched and present in the table (if a filter is applied, events not matching the filter are not selected, but they will be once the filter is removed). In this state, deselecting an item reverts to a custom selection.
Selecting All also auto-selects every event already fetched and present in the table; however, while applying a filter may change what is viewable, this option is intended for clearing the entire backlog of events, including those not shown. In this state, deselecting an item likewise reverts to a custom selection.
Once you have selected the events you wish to remove, click Delete Events to open a modal and review up to 50 items. Any events you don’t wish to delete anymore can be deselected as well. To continue with removal, click Yes to confirm and start the process.
Note that events are account-wide and that any events removed will be mirrored across all users in the account.
6 - Upgrading Anchore Enterprise
Upgrading from one version of Anchore Enterprise to another is normally handled seamlessly by the Helm chart or the docker compose configuration files that are provided along with each release; both follow the general methods in this guide. See the Specific Version Upgrades section for special instructions related to specific versions.
Upgrade scenarios
Anchore Enterprise is distributed as a docker image, which is composed of smaller micro-services that can be deployed in a single container or scaled out to handle load.
To retrieve the version of a running instance of Anchore, run the anchorectl system status command. The last column, titled "CODE VERSION", displays the running version of each service.
anchorectl system status
✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE │ HOST ID │ URL │ UP │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer │ anchore-quickstart │ http://analyzer:8228 │ true │ available │ 25 │ 4.9.5 │
│ apiext │ anchore-quickstart │ http://api:8228 │ true │ available │ 25 │ 4.9.5 │
│ rbac_manager │ anchore-quickstart │ http://rbac-manager:8228 │ true │ available │ 25 │ 4.9.5 │
│ notifications │ anchore-quickstart │ http://notifications:8228 │ true │ available │ 25 │ 4.9.5 │
│ catalog │ anchore-quickstart │ http://catalog:8228 │ true │ available │ 25 │ 4.9.5 │
│ rbac_authorizer │ anchore-quickstart │ http://rbac-authorizer:8228 │ true │ available │ 25 │ 4.9.5 │
│ reports_worker │ anchore-quickstart │ http://reports-worker:8228 │ true │ available │ 25 │ 4.9.5 │
│ reports │ anchore-quickstart │ http://reports:8228 │ true │ available │ 25 │ 4.9.5 │
│ simplequeue │ anchore-quickstart │ http://queue:8228 │ true │ available │ 25 │ 4.9.5 │
│ policy_engine │ anchore-quickstart │ http://policy-engine:8228 │ true │ available │ 25 │ 4.9.5 │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘
In this example the Anchore version is 4.9.5 and the database schema is version 25. In cases where the database schema is changed between releases, Anchore will upgrade the database schema at launch.
Pre-upgrade Procedure
Prior to upgrading Anchore, it is highly recommended to perform a database backup/snapshot: stop your Anchore installation and back up the database in its entirety. There is no automatic downgrade capability, so the only way to downgrade after an upgrade (whether it succeeds or fails) is to restore your database contents to a state from a prior version of Anchore and explicitly run the compatible version of Anchore against the corresponding database contents.
Whether you wish to have the ability to downgrade or not, we recommend backing up your Anchore database prior to upgrading the software as a best practice.
Upgrade Procedure (for deployments using Helm)
A Helm pre-upgrade hook initiates a Kubernetes job that scales down all active Anchore Enterprise pods and handles the Anchore database upgrade.
The Helm upgrade is marked as successful only upon the job’s completion. This process causes the Helm client to pause until the job finishes and new Anchore Enterprise pods are initiated. To monitor the upgrade, follow the logs of the upgrade jobs. These jobs are automatically removed after a subsequent successful Helm upgrade.
An optional post-upgrade hook is available to perform Anchore Enterprise upgrades without forcing all pods to terminate prior to running the upgrade. This is the same upgrade behavior that was enabled by default in the legacy anchore-engine chart. To enable the post-upgrade hook, set upgradeJob.usePostUpgradeHook=true in your values file.
For the latest upgrade instructions using the Helm chart, please refer to the official Anchore Helm Chart documentation.
Upgrade Procedure (for deployments using docker compose)
Review the latest docker-compose.yaml and merge any edits/changes from your original docker-compose.yaml.backup into the latest docker-compose.yaml.
Restart the Anchore containers
docker compose up -d
To monitor the progress of your upgrade, you can watch the docker logs from your catalog container, where you should see some initial output indicating whether or not an upgrade is needed or being performed, followed by the regular Anchore log output.
docker compose logs -f catalog
Once completed, you can review the new state of your Anchore install to verify the new version is running using the regular system status command.
anchorectl system status
Advanced / Manual Upgrade Procedure
If for any reason the automated upgrade fails, or you would like to perform the upgrade of the anchore database manually, you can use the following (general) procedure. This should only be done by advanced operators after backing up the anchore database, ensuring that the anchore database is up and running, and that all running anchore components are stopped.
Install the desired Anchore container manually.
Run the Anchore container but override the entrypoint to run an interactive shell instead of the default 'anchore-manager service start' entrypoint command (a sketch of this step is shown after this procedure).
Manually execute the database upgrade command, using the appropriate db_connect string. For example, if using Postgres, the db_connect string will look like postgresql://$ANCHORE_DB_HOST/$ANCHORE_DB_NAME?user=$ANCHORE_DB_USER&password=$ANCHORE_DB_PASSWORD
$ anchore-manager db --db-connect "postgresql://$ANCHORE_DB_HOST/$ANCHORE_DB_NAME?user=$ANCHORE_DB_USER&password=$ANCHORE_DB_PASSWORD" upgrade
[MainThread][anchore_manager.cli.utils/connect_database()][INFO] DB params: {"db_connect_args": {"timeout": 86400, "ssl": false}, "db_pool_size": 30, "db_pool_max_overflow": 100}
[MainThread][anchore_manager.cli.utils/connect_database()][INFO] DB connection configured: True
[MainThread][anchore_manager.cli.utils/connect_database()][INFO] DB attempting to connect...
[MainThread][anchore_manager.cli.utils/connect_database()][INFO] DB connected: True
...
...
The output will indicate whether or not a database upgrade is needed. It will then prompt for confirmation if it is, and will display upgrade progress output before completing.
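For reference, a minimal sketch of step 2 above (the image reference and tag are illustrative; use the Enterprise image and version you installed in step 1):
docker run -it --rm --entrypoint /bin/bash docker.io/anchore/enterprise:<desired-version>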
Specific Version Upgrades
This section is intended as a guide for any special instructions and information related to upgrading to specific versions of Enterprise.
If you are upgrading from an Anchore Enterprise version prior to 4.2.0, there is a known issue that will require you to upgrade to 4.2.0 or 4.3.0 first. Once completed, you will have no issues upgrading to 4.4.1. Please contact Anchore Support if you need further assistance.
Please Note: This issue was addressed in 4.5.0. Upgrading from a version prior to 4.2.0 will succeed in 4.5.0 and newer releases.
6.1 - v4.x --> v5.x Migration Guide
This guide will help you understand, plan, and execute the migration of your Anchore deployment from
Enterprise v4.x --> Enterprise v5.18.0
Warning
You cannot upgrade directly from 4.9.x to the latest release. You must first migrate to 5.9.0, using v2.10.0 of the Helm chart, before proceeding to a later release.
The Enterprise v5.x Major Release involved several breaking changes. The migration to a v5.x release can be more complex than the regular Anchore feature release upgrade.
There are four significant component changes required to migrate to Enterprise v5.18.0 that each have their own migration paths. This document will help you migrate all components in a safe and downtime-minimizing way.
The components are:
Anchore Enterprise: provides a new V2 API.
v5.18.0 only supports the new V2 API
v4.9.x supports both V1 and V2 APIs
PostgreSQL Database: version 13+ is required for Enterprise v5.18.0
Enterprise Helm Chart:
v5.18.0 can be deployed only with the new enterprise Helm chart.
The older anchore-engine chart will be at end-of-life with the 4.x series.
Integrations & Clients: all Anchore-provided integrations have new released versions that are compatible with v5.18.0 and support the new V2 API.
This guide will walk you through the process to go from this starting state.
Note: The upgrade to v4.9.x is very strongly recommended for all deployments as a key part of the migration process to v5.18.0. If you use ANY integrations or API calls, you should use v4.9.x and its dual-API support as the version of Anchore to run while you migrate all your integrations to use the V2 API.
Planning Your Migration
Timing: Each phase has different duration expectations, and below we’ll review the expectations and process for each phase of the migration. You should expect and plan for downtime for each phase except the client API migrations, which are done while the system is running.
The migration may be a multi-day process since it involves things like client migrations that may take days or weeks depending on your org and how many other systems are integrated with your Anchore deployment.
Combining Phases: Phases can be combined if you wish to use a smaller number of larger maintenance windows. Since
combining phases increases the complexity of each phase and associated risk of misconfigurations or errors, the combination should be carefully considered for your specific needs and risk tolerance.
Migration Path 1: Chart-Managed Database
If you have PostgreSQL deployed in Kubernetes using the Anchore-Engine Helm Chart, then this is the migration path for you.
graph
subgraph Start
%% Start at v4.8.x or earlier, using postgres 9.6 and the anchore-engine helm chart
anchore4("Enterprise <= v4.8.x")
pg9[("PostgreSQL 9.6")]
engineChart["anchore-engine chart"]
anchorectl("anchorectl v1.7.x") --V1 api calls--> anchore4
anchore4 --uses--> pg9
engineChart --deploys--> anchore4
end
subgraph step1[Latest Enterprise v4.9.x]
%% Upgrade to v4.9.x for V2 API
anchore49_1("Enterprise v4.9.x")
pg9_2[("PostgreSQL 9.6")]
engineChart1["anchore-engine chart"]
anchore49_1 --uses--> pg9_2
anchorectl3("anchorectl v1.8.x") --V1 api calls--> anchore49_1
engineChart1 --deploys--> anchore49_1
end
subgraph step2[Chart and DB Migrated]
%% Migrate to new Chart & DB Migration to PG13, no Anchore version change
anchore49("Enterprise = v4.9.x")
pg13[("PostgreSQL 13+")]
pg96[("PostgreSQL 9.6")]
engineChart2["anchore-engine chart"]
enterpriseChart["enterprise chart"]
engineChart2 --uses--> pg96
pg96 --migrates to--> pg13
anchore49 --uses--> pg13
anchorectl2("anchorectl v1.8.x") --V1 api calls--> anchore49
enterpriseChart --deploys--> anchore49
end
subgraph step3[Integrations Migrated]
%% Upgrade integrations/AnchoreCTL
anchoreInter3("Enterprise v4.9.x")
engineChart3["anchore-engine chart"]
enterpriseChart2["enterprise chart"]
pg13_4[("PostgreSQL 13+")]
pg96_2[("PostgreSQL 9.6")]
engineChart3 --> pg96_2
anchoreInter3 --> pg13_4
anchorectl5("anchorectl v4.9.x") --V2 api calls--> anchoreInter3
enterpriseChart2 --deploys--> anchoreInter3
end
subgraph finish["Enterprise v5.18.0"]
%% Upgrade to v5.18.0
anchore5("Enterprise v5.18.0")
enterpriseChart3["enterprise chart"]
pg13_5[("PostgreSQL 13+")]
anchore5 --> pg13_5
anchorectl6("anchorectl v5.18.0") --V2 api calls--> anchore5
enterpriseChart3 --deploys--> anchore5
end
Start --Upgrade Anchore Enterprise to latest v4.9.x release--> step1;
step1 --Migrate to Enterprise Chart and PG13+ DB--> step2;
step2 --Migrate integrations & anchorectl to use V2 API--> step3;
step3 --Upgrade Anchore Enterprise to v5.18.0 & delete 4.0.x deployment--> finish;
Step 1: Upgrade Anchore Enterprise to latest v4.9.x Release
Downtime: Required
Upgrade your Anchore deployment to v4.9.x. This is an important step for several reasons:
It is supported by both the legacy anchore-engine helm chart and the new enterprise helm chart
It supports both PostgreSQL 9.6 and newer versions (13+), so it provides a stable base from which to execute the other upgrade steps
It supports both the V1 and V2 APIs, so you can have a stable Anchore version for updating all your integrations
Upgrade mechanism: normal Anchore Enterprise upgrade process
Step 2: Migrate to Enterprise Chart, 4.9.x and PostgreSQL 13
Info
This does not bring you to the latest version of Anchore Enterprise. Moving to the Enterprise chart readies you for the 4.9.x to 5.x upgrade in step 4.
Step 3: Migrate all integrations and clients to V2 API compatible versions
Downtime: None for Anchore itself, but individual integrations may vary
Once your deployment is running v4.9.x you have a stable platform to migrate integrations and clients to using the V2 API of Enterprise. You should perform the upgrades/migrations for the new V2 API in this phase. This phase may last for a while and does not end until all your API calls are using the V2 endpoint instead of V1.
Recommended V2 API compatible version for each integration:
AnchoreCTL: v4.9.0
anchore-k8s-inventory: v1.1.1
anchore-ecs-inventory: v1.2.0
Kubernetes Admission Controller: v0.5.0
Jenkins Plugin: v1.1.0
Harbor Scanner Adapter: v1.2.0
enterprise-gitlab-scan: v4.0.0
Upgrading AnchoreCTL Usage in CI
The installation script provided via Deploying AnchoreCTL will only automatically deploy new releases that are V1 API compatible, so you need to update your use of that script to pin a specific version.
For example, use:
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b <DESTINATION_DIR> v4.9.0
Confirming V1 API is no longer in use
To verify that all clients have been updated, you can review the logs from the API containers in your v4.9.x deployment. We recommend that you monitor for multiple days to verify there are no periodic processes that still use the old endpoint.
Step 4: Upgrade from Enterprise 4.9.x to 5.9 using 2.10.0 of the chart
You will want to install the compatible version of AnchoreCTL (v5.18.0) at this time as well:
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b <DESTINATION_DIR> v5.18.0
Migration Path 2: External DB
If you deploy PostgreSQL using any mechanism other than the Anchore-provided chart (e.g. AWS RDS, your own DB chart,
Google CloudSQL, etc.), then this is the migration plan for you.
graph
subgraph Start[Enterprise v4.x]
anchoreStart("Enterprise <= v4.8.X")
pg9[("PostgreSQL 9.6")]
engineChart["anchore-engine chart"]
anchorectl("anchorectl v1.7.x") --V1 api calls--> anchoreStart
anchoreStart --uses--> pg9
engineChart --deploys--> anchoreStart
end
subgraph step1[Latest Enterprise v4.9.x]
%% Upgrade to v4.9.x for V2
anchoreInter1("Enterprise v4.9.x")
pg9_2[("PostgreSQL 9.6")]
engineChart2["anchore-engine chart"]
anchoreInter1 --uses--> pg9_2
anchorectl3("anchorectl v1.8.x") --V1 api calls--> anchoreInter1
engineChart2 --deploys--> anchoreInter1
end
subgraph step2[Enterprise Helm Chart]
%% Use new chart
anchoreInter2("Enterprise v4.9.x")
enterpriseChart["enterprise chart"]
pg9_3[("PostgreSQL 9.6")]
anchoreInter2 --> pg9_3
anchorectl4("anchorectl v1.8.x") --V1 api calls--> anchoreInter2
enterpriseChart --deploys--> anchoreInter2
end
subgraph step3[PostgreSQL 13+]
%% Migrate to PG13+ , no Anchore version change
anchoreInter3("Enterprise = v4.9.x")
pg13[("PostgreSQL 13+")]
enterpriseChart2["enterprise chart"]
anchoreInter3 --uses--> pg13
anchorectl2("anchorectl v1.8.x") --V1 api calls--> anchoreInter3
enterpriseChart2 --deploys--> anchoreInter3
end
subgraph step4[Integrations using V2 API]
%% Upgrade integrations/AnchoreCTL
anchoreInter4("Enterprise v4.9.x")
enterpriseChart3["enterprise chart"]
pg13_4[("PostgreSQL 13+")]
anchoreInter4 --> pg13_4
anchorectl5("anchorectl v4.9.x") --V2 api calls--> anchoreInter4
enterpriseChart3 --deploys--> anchoreInter4
end
subgraph finish[Enterprise v5.18.0]
%% Upgrade to v5.18.0
anchore5("Enterprise v5.18.0")
enterpriseChart4["enterprise chart"]
pg13_5[("PostgreSQL 13+")]
anchore5 --> pg13_5
anchorectl6("anchorectl v5.18.0") --V2 api calls--> anchore5
enterpriseChart4 --deploys--> anchore5
end
Start --Upgrade to latest v4.9.x Enterprise--> step1;
step1 --Migrate to Enterprise Helm Chart--> step2;
step2 --Upgrade External DB to PostgreSQL 13+--> step3;
step3 --Migrate Integrations and AnchoreCTL to use V2 API--> step4;
step4 --Upgrade Anchore to v5.18.0 --> finish;
Step 1: Upgrade to latest Anchore Enterprise v4.9.x
Downtime: Required
Upgrade your Anchore deployment to v4.9.x. This is an important step for several reasons:
It is supported by both the legacy anchore-engine helm chart and the new enterprise helm chart
It supports both PostgreSQL 9.6 and newer versions (13+), so it provides a stable base from which to execute the other upgrade steps
It supports both the V1 and V2 APIs, so you can have a stable Anchore version for updating all your integrations
Step 2: Upgrade PostgreSQL from 9.6.x to 13+
FIPS Enabled Hosts
If Anchore Enterprise is deployed on FIPS-enabled hosts and Amazon RDS (including GovCloud) is hosting the Anchore database, you will be required to use PostgreSQL version 16 or higher. This is due to RHEL 9 enforcing the FIPS-140-3 requirements. Amazon RDS only supports EMS or TLS 1.3 with PostgreSQL 16 or greater.
Downtime: Required
Enterprise v5.18.0 requires PostgreSQL 13 or later to run. The DB upgrade process will be specific to your deployment mechanisms and way of running Postgres. Depending on what version of PostgreSQL you are running when you start, there may be multiple DB upgrade operations necessary in PostgreSQL to get to 13+.
However, this upgrade can be done with any Anchore version. All 4.x versions of Anchore already support PostgreSQL 13+, so the DB upgrade can be executed outside any changes to the Anchore deployment itself.
If you are using AWS RDS or another cloud platform for hosting your PostgreSQL database, please refer to their upgrade
documentation for the best practices to upgrade your instance(s) to version 13 or higher.
Step 3: Migrate to Enterprise Helm Chart
Info
This does not bring you to the latest version of Anchore Enterprise. Moving to the Enterprise chart readies you for the 4.9.x to 5.x upgrade in step 4.
Step 4: Upgrade all your integrations/clients to use the V2 API
Downtime: None for Anchore itself, but individual integrations may vary
Once your deployment is running v4.9.x you have a stable platform to migrate integrations and clients to using the V2 API of Enterprise. You should perform the upgrades/migrations for the new V2 API in this phase. This phase may last for a while and does not end until all your API calls are using the V2 endpoint instead of V1.
Recommended V2 API compatible version for each integration:
AnchoreCTL: v4.9.0
anchore-k8s-inventory: v1.1.1
anchore-ecs-inventory: v1.2.0
Kubernetes Admission Controller: v0.5.0
Jenkins Plugin: v1.1.0
Harbor Scanner Adapter: v1.2.0
enterprise-gitlab-scan: v4.0.0
Upgrading AnchoreCTL Usage in CI
The installation script provided via Deploying AnchoreCTL will only automatically deploy new releases that are V1 API compatible, so you need to update your use of that script to pin a specific version.
For example, use:
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b <DESTINATION_DIR> v4.9.0
Confirming V1 API is no longer in use
To verify that all clients have been updated, you can review the logs from the API containers in your v4.9.x deployment. We recommend that you monitor for multiple days to verify there are no periodic processes that still use the old endpoint.
Step 5: Upgrade from Enterprise 4.9.x to 5.9 using 2.10.0 of the chart
You will want to install the compatible version of AnchoreCTL (v5.18.0) at this time as well:
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b <DESTINATION_DIR> v5.18.0
Verifying the Upgrade
Verify the version you’re using of AnchoreCTL
anchorectl version – All users should see v5.18.0 for the AnchoreCTL version
anchorectl system status – The system should return v5.18.0
7 - Anchore Secure - Vulnerability Management
Vulnerability management is the practice of identifying, categorizing, and remediating security vulnerabilities in software. Using a Software Bill of Materials (SBOM) as a foundation, Anchore Enterprise provides a mechanism for scanning containerized software for security vulnerabilities. With 100% API coverage and fully-documented APIs, users can automate vulnerability scanning and monitoring - performing scans in CI/CD pipelines, registries, and Kubernetes platforms. Furthermore, Anchore Enterprise allows users to identify malware, secrets, and other security risks through catalogers.
Jump into a particular topic using the links below:
Vulnerability matching in Anchore Enterprise and Grype begins with collecting vulnerability data from multiple sources to identify vulnerabilities in the packages cataloged within an SBOM.
Anchore Enterprise and Grype consolidate data from these sources into a format suitable for vulnerability identification in SBOMs. One key source of data is the National Vulnerability Database (NVD). The NVD serves as a widely recognized, vendor-independent resource for vulnerability identification. Additionally, it provides a framework for measuring the severity of vulnerabilities. For instance, the NVD introduced the Common Vulnerability Scoring System (CVSS), which assigns numerical scores ranging from 0 to 10 to indicate the severity of vulnerabilities. These scores help organizations prioritize vulnerabilities based on their potential impact.
However, due to known limitations with NVD data, relying on additional sources becomes essential. Anchore Enterprise and Grype also collect vulnerability data from vendor-specific databases, which play a crucial role in accurate and efficient detection. These sources enable vulnerability matching from the vendor’s perspective. Examples of such vendor-specific databases include GitHub, the Microsoft Security Response Center (MSRC), and the Red Hat Security Response Database, among others.
Data Import & Normalization
Anchore has a tool called vunnel that is responsible for reaching out to various data sources, parsing and normalizing that data, then storing it for future use.
There is not one standard format for publishing vulnerability data, and even when there is a standardized data format, such as OVAL or OSV, those formats often have minor incompatible differences in their implementation. The purpose of vunnel is to understand each data source then output a single consistent format that can be used to construct a vulnerability database.
Providers
The process begins with vunnel reaching out to vulnerability data sources. These sources are known as “providers”. The following are a list of vunnel Providers:
Alpine: Focuses on lightweight Linux distributions and provides vulnerability data tailored specifically to Alpine packages.
Amazon: Offers vulnerability data for its cloud services and Linux distributions, such as Amazon Linux.
Chainguard: Specializes in securing software supply chains and delivers vulnerability insights for containerized environments.
Debian: Maintains a robust security tracker for vulnerabilities in its packages, concentrating on open-source software used in Debian-based systems.
GitHub: Provides vulnerability data supported by an extensive advisory database for developers.
Mariner (CBL-Mariner): Microsoft's Linux distribution, which provides vulnerability data within its ecosystem.
NVD (National Vulnerability Database): Serves as the official U.S. government repository of vulnerability information.
Oracle: Tracks vulnerabilities in Oracle Linux and other Oracle products, focusing on enterprise environments.
RHEL (Red Hat Enterprise Linux): Delivers detailed and timely vulnerability data for Red Hat products.
SLES (SUSE Linux Enterprise Server): Offers vulnerability data for SUSE Linux products, with a strong focus on enterprise solutions, particularly in cloud and container environments.
Ubuntu: Maintains a well-documented vulnerability tracker and provides regular security updates for its popular Linux distribution.
Wolfi: A community-driven, secure-by-default Linux-based distribution that emphasizes supply chain security and provides reliable vulnerability tracking.
vunnel reaches out to all of these providers, collates vulnerability data and consolidates it for use. The end product of the operations of vunnel is what we call the Grype database (GrypeDB).
Building GrypeDB
When the data from vunnel is collected into a database, we call that GrypeDB. This is a SQLite database that is used by both Grype and Anchore Enterprise for matching vulnerabilities. The Anchore Enterprise database and the Grype database (consolidated by vunnel) are not the same data. The hosted Anchore Enterprise database contains the consolidated GrypeDB as well as the Exclusion database and Microsoft MSRC vulnerability data.
Non-Anchore (upstream) Data Updates
When there are problems with other data sources, we contact those upstream sources and work with them to correct issues. Anchore has an “upstream first” policy for data corrections. Whenever possible we will work with upstream data sources rather than trying to correct only our data. We believe this creates a better overall vulnerability data ecosystem, and fosters beneficial collaboration channels between Anchore and the upstream projects.
An example of how we submit upstream data updates can be seen in the GitHub Advisory Database.
Data Enrichment
Due to the known issues with the NVD, Anchore Enterprise enhances the quality of its data for analysis by enriching the information obtained from the NVD. This process involves human intervention to review and correct the data. Once this manual process is completed, the cleaned and refined data is stored in the Anchore Enrichment Database.
The Anchore Enriched Data can be reviewed in GitHub.
The scripts that drive this enrichment process are also available in GitHub.
Before implementing this process, correcting NVD data was a challenge. However, with our enrichment process, we now have the flexibility to make changes to affected products and versions. The key advantage is that the data used by Anchore Enterprise is now highly reliable, ensuring that any downloaded data is accurate and free from the common issues associated with NVD data.
An example of enriching NVD data for more accurate detection is CVE-2024-4030, which was initially identified as affecting Debian when it actually only impacts Windows. By applying our enrichment process, we were able to correct this error.
Vulnerability Matching Process
When it’s time to compare the data from an SBOM to the vulnerability data constructed into Anchore Enterprise, we call that matching. Anytime vulnerability data is surfaced in Anchore Enterprise, that data is the result of vulnerability matches.
CPE Matching
CPE, which stands for Common Platform Enumeration, is a structured naming scheme standardized by the National Institute of Standards and Technology (NIST) to describe software, hardware, and firmware. It uses a standardized format that helps tools and systems compare and identify products efficiently.
CPE matching involves comparing the CPEs found in the SBOM of a software product against a list of known CPE entries to find a match. The diagram below illustrates the steps involved in CPE matching.
Due to the current state of the NVD data as mentioned above, CPE matching can sometimes lead to false positives. This led to the creation of the exclusions dataset that we manage in Anchore Enterprise. Vulnerability matching can be further tuned through our ability to disable CPE matching for a supported ecosystem.
Vulnerability Match Exclusions
There are times we cannot solve a false positive match using data alone. This is generally due to limitations of how CPE matching works. In those instances, Anchore Enterprise has a concept called Vulnerability Match Exclusions. These exclusions allow us to remove a vulnerability from the findings for a specific set of match criteria.
The data for the vulnerability match exclusions is held in a private repository. The data behind this list is not included in the open source Grype vulnerability data.
For example, consider CVE-2012-2055, a vulnerability reported against the GitHub product. When trying to match a CPE against this CVE, CPE is unable to capture this level of detail, so GitHub libraries for different ecosystems will show up as affected; the Python GitHub library is an example. In order to resolve this, we exclude the language ecosystems using a match exclusion.
The matching process that happens against an SBOM is the same basic process in both Grype and Anchore Enterprise. The vulnerability data is stored in GrypeDB, details such as vulnerability ID, package and versions affected as well as fix information are part of these records.
For example, for vulnerability CVE-2024-9823 we store the package name, Jetty, the fixed version, 9.4.54, and which ecosystems are affected, such as Debian, NVD, and Java. We call these ecosystems a namespace in the context of a match.
The namespace used for the match is determined by the package stored in the SBOM. For a Debian package, the Debian namespace would be used; for Java, GitHub is used by default in Grype, but NVD is used by default in Anchore Enterprise. The default matcher for Java can be changed in Anchore Enterprise, and we encourage you to do so as it will result in higher quality matches. We will be changing this default in a future release. See disabling CPE matching per supported ecosystem.
The details about the versions affected will be used to determine if the version reported by the SBOM falls within the affected range. If it does, the vulnerability matches.
For a successful match, the fixed details field will be used to display which version fixes a particular vulnerability. The fix details are specific to each namespace. The version in Debian that fixes this vulnerability, 9.4.54-1, is not the same as the version that fixes the Java package, 9.4.54.
It should also be noted that if a vulnerability appears on the match exclusion list, it would be removed as a match.
Once a match exists, then additional metadata can be surfaced. We store details such as severity and CVSS in this table. Sometimes a field could be missing, such as severity or CVSS. Missing fields will be filled in with the data from NVD if it is available there.
Vulnerability Matching Configuration
Search by CPE can be globally configured per supported ecosystem via the Anchore Enterprise policy engine config. The default enables search by CPE for all ecosystems except JavaScript (since NPM package vulnerability reports are exhaustively covered by the GitHub Security Advisory Database).
A fully-specified default config is shown below:
policy_engine:
  vulnerabilities:
    matching:
      default:
        search:
          by_cpe:
            enabled: true
      ecosystem_specific:
        dotnet:
          search:
            by_cpe:
              enabled: true
        golang:
          search:
            by_cpe:
              enabled: true
        java:
          search:
            by_cpe:
              enabled: true
        javascript:
          search:
            by_cpe:
              enabled: false
        python:
          search:
            by_cpe:
              enabled: true
        ruby:
          search:
            by_cpe:
              enabled: true
        stock:
          search:
            by_cpe:
              # Disabling search by CPE for the stock matcher will entirely disable binary-only matches
              # and is *NOT ADVISED*
              enabled: true
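As a minimal sketch, disabling CPE search for just the Java ecosystem would change the corresponding value in the policy_engine section of config.yaml; if in doubt whether a partial override is merged with defaults in your deployment, copy the fully-specified block above and change only the java value:
policy_engine:
  vulnerabilities:
    matching:
      ecosystem_specific:
        java:
          search:
            by_cpe:
              enabled: false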
When matching vulnerabilities against a Linux distribution, such as Alpine, Red Hat, or Ubuntu, there is a concept we call “comprehensive distribution”. A comprehensive distribution reports both fixed and unfixed vulnerabilities in their data feed.
For example, Red Hat reports on all vulnerabilities, including unfixed vulnerabilities. Some distros, like Alpine, do not report unfixed vulnerabilities. When a distribution does not contain comprehensive vulnerability information, we fall back to other data sources as a best effort to determine vulnerabilities that affect Alpine and are not fixed yet.
Fix Details
There are some additional details for the fixed data from NVD that should be explained. NVD doesn’t contain explicit fix information for a given vulnerability. Other namespaces do, such as GitHub and Debian. There is a concept of “Less Than” and “Less Than or Equal” in the NVD data. When a vulnerability is tagged with “Less Than or Equal”, it could mean there is no fix available, or the fix couldn’t be figured out, or a fix was unavailable at the time NVD looked at it. In those cases we cannot show fix details for a vulnerability match.
If NVD uses “Less Than”, it is assumed that the version noted is the fixed version, unless that version is part of the affected range of a subsequent CPE configuration for the same CVE. We will present that version as containing the fix.
For example if we see data that looks like this:
some_package LessThan 1.2.3
We would assume version 1.2.3 contains the fix, and any version less than that, such as 1.2.2 is vulnerable. Alternatively, if we see:
some_package LessThanOrEqual 1.2.2
We know version 1.2.2 and below are vulnerable. We however do not know which version contains the fix. It could be in version 1.3.0, or 1.2.3, or even 2.0.0. In these cases we do not surface fixed details. If we are able to figure out such details in the future, we will update our CVE data.
In this section you will learn how to analyze images with Anchore Enterprise using AnchoreCTL in two different ways:
Distributed Analysis: AnchoreCTL analyzes the image content on the host where it is run and imports the analysis into your Anchore deployment
Centralized Analysis: The Anchore deployment downloads and analyzes the image content directly
Using AnchoreCTL for Centralized Analysis
Overview
This method of image analysis uses the Enterprise deployment itself to download and analyze the image content. You’ll use AnchoreCTL to make API requests to Anchore to tell it which image to analyze but the Enterprise deployment does the work.
You can refer to the Image Analysis Process document in the concepts section to better understand how centralized analysis works in Anchore.
sequenceDiagram
participant A as AnchoreCTL
participant R as Registry
participant E as Anchore Deployment
A->>E: Request Image Analysis
E->>R: Get Image content
R-->>E: Image Content
E->>E: Analyze Image Content (Generate SBOM and secret scans etc) and store results
E->>E: Scan sbom for vulns and evaluate compliance
Usage
The anchorectl image add command instructs the Anchore Enterprise deployment to pull (download) and analyze an image from a registry. Anchore Enterprise will attempt to retrieve metadata about the image from the Docker registry and if successful will initiate a pull of the image and queue the image for analysis. The command will output details about the image including the image digest, image ID, and full name of the image.
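For example (the image name is illustrative):
anchorectl image add docker.io/library/nginx:latest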
For an image that has not yet been analyzed, the status will appear as not_analyzed. Once the image has been downloaded it will be queued for analysis. When the analysis begins the status will be updated to analyzing, after which the status will update to analyzed.
Anchore Enterprise can be configured to have a size limit for images being added for analysis. Attempting to add an image that exceeds the configured size will fail, return a 400 API error, and log an error message in the catalog service detailing the failure. This feature is disabled by default; see the documentation for additional details on the functionality of this feature and instructions on how to configure the limit.
Using AnchoreCTL for Distributed Analysis
Overview
This way of adding images uses anchorectl to perform analysis of an image outside the Enterprise deployment, so the Enterprise deployment never downloads or touches the image content directly. The generation of the SBOM, secret searches, filesystem metadata, and content searches are all performed by AnchoreCTL on the host where it is run (CI, laptop, runtime node, etc.) and the results are imported into the Enterprise deployment where they can be scanned for vulnerabilities and evaluated against policy.
sequenceDiagram
participant A as AnchoreCTL
participant R as Registry/Docker Daemon
participant E as Anchore Deployment
A->>R: Get Image content
R-->>A: Image Content
A->>A: Analyze Image Content (Generate SBOM and secret scans etc)
A->>E: Import SBOM, secret search, fs metadata
E->>E: Scan sbom for vulns and evaluate compliance
Configuration
Enabling the full set of analyzers, “catalogers” in AnchoreCTL terms, requires updates to the config file used by AnchoreCTL. See Configuring AnchoreCTL for more information on the format and options.
Usage
Note
To locally analyze an image that has been pushed to a registry, it is strongly recommended to use '--from registry' rather than '--from docker'. This removes the need to have docker installed and also results in a consistent image digest for later use. The registry option gives anchorectl access to data that the docker source does not, due to limitations with the Docker Daemon itself and how it handles manifests and image digests.
The anchorectl image add --from [registry|docker] command will run a local SBOM-generation and analysis (secret scans, filesystem metadata, and content searches) and upload the result to Anchore Enterprise without ever having that image touched or loaded by your Enterprise deployment.
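For example, a minimal sketch of a distributed analysis (the image name is illustrative):
anchorectl image add --from registry docker.io/library/nginx:latest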
For an image that has not yet been analyzed, the status will appear as not_analyzed. Once the image has been downloaded it will be queued for analysis. When the analysis begins the status will be updated to analyzing, after which the status will update to analyzed.
The '--platform' option in distributed analysis specifies a platform other than the local host's to use when retrieving the image from the registry for analysis by AnchoreCTL.
For images that you are building yourself, the Dockerfile used to build the image should always be passed to Anchore Enterprise at the time of image addition. This is achieved by adding the image as above, but with the additional option to pass the Dockerfile contents to be stored with the system alongside the image analysis data.
To update an image's Dockerfile, simply run the same command again with the path to the updated Dockerfile along with '--force' to re-analyze the image with the updated Dockerfile. Note that running add without --force (see below) will not re-add an image if it already exists.
Providing Dockerfile content is supported in both push and pull modes for adding images.
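As a sketch of this workflow, assuming the image was built from a Dockerfile in the current directory (the image name is illustrative, and you should confirm the exact Dockerfile flag with anchorectl image add --help for your version):
anchorectl image add docker.io/myorg/myapp:latest --dockerfile ./Dockerfile --force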
Additional Options
When adding an image, there are some additional (optional) parameters that can be used. We show some examples below; all apply to both distributed and centralized analysis workflows.
The --force option can be used to reset the image analysis status of any image to not_analyzed, which is the base analysis state for an image. This option shouldn't be necessary in normal circumstances, but can be useful if image re-analysis is needed for any reason.
The --annotation parameter can be used to specify 'key=value' pairs to associate with the image at the time of image addition. These annotations will then be carried along with the tag, will appear in image records when fetched, and will appear in webhook notification payloads that contain image information when they are sent from the system. To change an annotation, simply run the add command again with the updated annotation and the old annotation will be overridden.
The '--no-auto-subscribe' flag can be used if you do not wish for the system to automatically subscribe the input tag to the 'tag_update' subscription, which controls whether or not the system will automatically watch the added tag for image content updates and pull in the latest content for centralized analysis. See Subscriptions for more information about using subscriptions and notifications in Anchore.
These options are supported in both distributed and centralized analysis.
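For example, combining the options above (the image name and annotation values are illustrative):
anchorectl image add docker.io/library/nginx:latest --annotation build=dev --annotation owner=platform-team --no-auto-subscribe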
Image Tags
In this example, we're adding docker.io/mysql:latest. If we attempt to add a tag that maps to the same image, for example docker.io/mysql:8, Anchore Enterprise will detect the duplicate image identifiers and return the details of all tags matching that image.
The following command instructs Anchore Enterprise to delete the image analysis from the working set using a tag. The --force option must be used if there is only one digest associated with the provided tag, or any active subscriptions are enabled against the referenced tag.
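A sketch of such a deletion (the tag is illustrative):
anchorectl image delete docker.io/mysql:latest --force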
Anchore Enterprise also allows adding images directly by digest / tag / timestamp tuple, which can be useful to add images that are still available in a registry but not associated with a current tag any longer.
To add a specific image by digest with the tag it should be associated with:
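A hedged illustration of the form this can take (the digest is a placeholder, not a real value; the image name is illustrative):
anchorectl image add docker.io/library/nginx:latest@sha256:<digest>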
Note: this will submit the specific image by digest with the associated tag, but Anchore will treat that digest as the most recent digest for the tag, so if the image registry actually has a different history (e.g. a newer image has been pushed to that tag), then the tag history in Anchore may not accurately reflect the history in the registry.
During the analysis of container images, Anchore Enterprise performs deep inspection, collecting data on all artifacts in the image including files, operating system packages and software artifacts such as Ruby GEMs and Node.JS NPM modules.
Inspecting images
The image content command can be used to return detailed information about the content of the container image.
AnchoreCTL will output a subset of fields from the content view; for example, for files only the file name and size are displayed. To retrieve the full output, the --json parameter should be passed.
The INPUT_IMAGE can be specified in one of the following formats:
Image Digest
Image ID
registry/repo:tag
The VULN_TYPE currently supports:
os: Vulnerabilities against operating system packages (RPM, DPKG, APK, etc.)
non-os: Vulnerabilities against language packages (NPM, GEM, Java Archive (jar, war, ear), Python PIP, .NET NuGet, etc.)
all: Combination report containing both ‘os’ and ’non-os’ vulnerability records.
The system has been designed to incorporate 3rd party feeds for other vulnerabilities.
Examples
To generate a report of OS package (RPM/DEB/APK) vulnerabilities found in the image, including the CVE identifier, vulnerable package, severity level, vulnerability details, and version of the fixed package (if available), run:
# anchorectl image vulnerabilities debian:latest -t os
Currently the system draws vulnerability data specifically matched to the following OS distros:
Alpine
CentOS
Debian
Oracle Linux
Red Hat Enterprise Linux
Red Hat Universal Base Image (UBI)
Ubuntu
Suse Linux
Amazon Linux 2
Google Distroless
To generate a report of language package (NPM/GEM/Java/Python) vulnerabilities, the system draws vulnerability data from the NVD data feed, and vulnerability reports can be viewed using the ’non-os’ vulnerability type:
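For example, mirroring the 'os' example above (the image name is illustrative):
# anchorectl image vulnerabilities node:latest -t non-os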
To generate a list of all vulnerabilities that can be found, regardless of whether they are against an OS or non-OS package type, the ‘all’ vulnerability type can be used:
# anchorectl image vulnerabilities node:latest -t all
Finally, for any of the above queries, these commands (and other anchorectl commands) can be passed the -o json flag to output the data in JSON format:
# anchorectl -o json image vulnerabilities node:latest -t all
Other options can be reviewed by issuing anchorectl image vulnerabilities --help at any time.
Next Steps
Subscribe to receive notifications when the image is updated, when the policy status changes or when new vulnerabilities are detected.
7.1.2 - Image Analysis via UI
Overview
In this section you will learn how to submit images for analysis using the user
interface, and how to execute a bulk removal of pending items or
previously-analyzed items from within a repository group.
Getting Started
From within an authenticated session, click the Image Analysis button on the
navigation bar:
You will be presented with the Image Analysis view. On the right-hand side
of this view you will see the Analyze Repository and Analyze Tag buttons:
These controls allow you to add entire repositories or individual items to
the Anchore analysis queue, and to also provide details about how you would like
the analysis of these submissions to be handled on an ongoing basis. Both
options are described below in the following sections.
Analyze a Repository
After clicking the Analyze Repository button, you are presented with the
following dialog:
The following fields are required:
Registry—for example: docker.io
Repository—for example: library/centos
Provided below these fields is the Watch Tags in Repository configuration
toggle. By default, when One-Time Tag Analysis is selected all tags
currently present in the repository will be analyzed; once initial analysis is
complete the repository will not be watched for future additions.
Setting the toggle to Automatically Check for Updates to Tags specifies that
the repository will be monitored for any new tag additions that take place
after the initial analysis is complete. Note that you are also able to set
this option for any submitted repository from within the Image Analysis
view.
Once you have populated the required fields and click OK, you will be
notified of the overhead of submitting this repository by way of a count that
shows the maximum number of tags detected within that repository that will be
analyzed:
You can either click Cancel to abandon the repository analysis request at
this point, or click OK to proceed, whereupon the specified repository will
be flagged for analysis.
Max image size configuration applies to repositories added via UI. See max image size
Analyze a Tag
After clicking the Analyze Tag button, you are presented with the
following dialog:
The following fields are required:
Registry—for example, docker.io
Repository—for example, library/centos
Tag—for example, latest
Some additional options are provided on the right-hand side of the dialog:
Watch Tag—enabling this toggle specifies that the tag should be
monitored for image updates on an ongoing basis after the initial analysis
Force Reanalysis—if the specified tag has already been analyzed, you can
force re-analysis by enabling this option. You may want to force re-analysis if
you decide to add annotations (see below) after the initial analysis. This
option is ignored if the tag has not yet been analyzed.
Add Annotation—annotations are optional key-pair values that can be
added to the image metadata. They are visible within the Overview tab of
the Image Analysis view once the image has been analyzed, as well as from
within the payload of any webhook notification from Anchore that contains image
information.
Also note that there is a section here for you to upload Dockerfiles. When you provide the Dockerfile for an image here, if the image has already been analyzed before - you will need to make sure the ‘Force Reanalysis’ box is ticked. Once the Dockerfile is added, you can find and view it in the Build Summary of the image.
Once you have populated the required fields and click OK, the specified tag
will be scheduled for analysis.
Max image size configuration applies to images added via UI. See max image size
Viewing and Downloading Vulnerabilities
To view the vulnerability details, follow these steps:
Click on the image icon.
Under the Repository column, click the hyperlink of the repository where the desired image is located.
You will then see all the images in the selected repository. Under the Most Recently Analyzed Image Digest column, click on the image digest for which you want to view or download the vulnerabilities.
This takes you to the Policy compliance page.
On this page, click on Vulnerabilities.
After clicking the vulnerabilities icon, you will see a list of vulnerabilities for that image. To download the vulnerabilities, click on the Vulnerability Report icon and choose either the JSON or CSV format.
Repository Deletion
Shown below is an example of a repository view under Image Analysis:
From a repository view you can carry out actions relating to the bulk removal of
items in that repository. The Analysis Cancellation / Repository Removal
control is provided in this view, adjacent to the analysis controls:
After clicking this button you are presented with the following options:
Cancel Images Currently Pending Analysis—this option is only enabled if
you have one or more tags in the repository view that are currently scheduled
for analysis. When invoked, all pending items will be removed from the queue.
This option is particularly useful if you have selected a repository for
analysis that contains many tags, and the overall analysis operation is
taking longer than initially expected.
Note: If there is at least one item present in the repository that is
not pending analysis, you will be offered the opportunity to decide if you
want the repository to be watched after this operation is complete.
Remove Repository and Analyzed Items—In order to remove a repository from
the repository view in its entirety, all items currently present within the
repository must first be removed from Anchore. When invoked, all items (in any
state of analysis) will be removed. If the repository is being watched, this
subscription is also removed.
7.2 - Scanning Repositories
Introduction
Individual images can be added to Anchore Enterprise using the image add command. This may be performed by a CI/CD plugin such as Jenkins or manually by a user with the UI, AnchoreCTL or API.
Anchore Enterprise can also be configured to scan repositories and automatically add any tags found in the repository.
This is referred to as a Repository Subscription. Once added, Anchore Enterprise will periodically check the
repository for new tags and add them to Anchore Enterprise. For more details on the Repository Subscription, please see
Subscriptions
Note: When you add a registry to Anchore, no images are pulled automatically. This is to prevent your Anchore deployment from being overwhelmed by a very large number of images.
Therefore, you should think of adding a registry as a preparatory step that allows you to then add specific repositories or tags without having to provide the access credentials for each.
Because a repository typically includes a manageable number of images, when you add a repository to Anchore, all tags in that repository are automatically pulled and analyzed by Anchore.
For more information about managing registries, see Managing Registries.
Adding Repositories
The repo add command instructs Anchore Enterprise to add the specified repository to the watch list.
Once added, Anchore Enterprise will identify the list of tags within the repository and add them to the catalog to be analyzed.
There is an option to exclude existing tags from being added to the system. This is useful when you want to watch for
and add only new tags to the system without adding tags that are already present. To do this, use the --exclude-existing-tags option.
Also by default Anchore Enterprise will automatically add the discovered tags to the list of subscribed tags
( see Working with Subscriptions ). However, this
behavior can be overridden by passing the --auto-subscribe=<true|false> option.
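For example, to add a repository while skipping tags that already exist and without auto-subscribing new tags (the repository name is illustrative; the flags are those described above):
anchorectl repo add docker.io/alpine --exclude-existing-tags --auto-subscribe=false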
Listing Repositories
The repo list command will show the repositories monitored by Anchore Enterprise.
The del option can be used to instruct Anchore Enterprise to remove the repository from the watch list. Once the repository record has been deleted no further changes to the repository will be detected by Anchore Enterprise.
Note: No existing image data will be removed from Anchore Enterprise.
# anchorectl repo del docker.io/alpine
✔ Deleted repo
No results
Unwatching Repositories
When a repository is added, Anchore Enterprise will monitor the repository for new and updated tags. This behavior can be disabled preventing Anchore Enterprise from monitoring the repository for changes.
In this case the repo list command will show false in the Watched column for this repository.
The repo watch command instructs Anchore Enterprise to monitor a repository for new and updated tags. By default repositories added to Anchore Enterprise are automatically watched. This option is only required if a repository has been manually unwatched.
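For example (the repository name is illustrative; the unwatch form is shown as the assumed counterpart for disabling monitoring):
anchorectl repo watch docker.io/alpine
anchorectl repo unwatch docker.io/alpine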
As of v3.0, Anchore Enterprise can be configured to have a size limit for images being added for analysis. This feature applies to the repo watcher. Images that exceed the maximum configured size in the repo being watched will not be added, and a message will be logged in the catalog service. This feature is disabled by default; see the documentation for additional details on the functionality of this feature and instructions on how to configure the limit.
Removing a Repository and All Images
There may be a time when you wish to stop a repository analysis while the analysis is running (e.g., you accidentally started watching a repository with a large number of tags). There are several steps in the process, which are outlined below. We will use docker.io/library/alpine as an example.
Note: Be careful when deleting images. In this flow, Anchore deletes the image, not just the repository/tag combo. Because of this, deletes may impact more than the expected repository since an image may have tags in multiple repositories or even registries.
Check the State
Take a look at the repository list.
anchorectl repo list
✔ Fetched repos
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ true │
└──────────────────┴─────────────┴────────┘
Also look at the image list.
anchorectl image list | grep docker.io/alpine
✔ Fetched images
│ docker.io/alpine:20220328 │ sha256:c11c38f8002da63722adb5111241f5e3c2bfe4e54c0e8f0fb7b5be15c2ddca5f │ not_analyzed │ active │
│ docker.io/alpine:3.16.0 │ sha256:4ff3ca91275773af45cb4b0834e12b7eb47d1c18f770a0b151381cd227f4c253 │ not_analyzed │ active │
│ docker.io/alpine:20220316 │ sha256:57031e1a3b381fba5a09d5c338f7dbeeed2260ad5100c66b2192ab521ae27fc1 │ not_analyzed │ active │
│ docker.io/alpine:3.14.5 │ sha256:aee6c86e12b609732a30526ddfa8194e4a54dc5514c463e4c2e41f5a89a0b67a │ not_analyzed │ active │
│ docker.io/alpine:3.15.5 │ sha256:26284c09912acfc5497b462c5da8a2cd14e01b4f3ffa876596f5289dd8eab7f2 │ not_analyzed │ active │
...
...
Removing the Repository from the Watched List
Unwatch docker.io/library/alpine to prevent future automatic updates.
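A sketch of the corresponding command (the listing above shows the repository key as docker.io/alpine, so that key is used here):
anchorectl repo unwatch docker.io/alpine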
Subscribe to receive notifications when the image is updated, when the policy status changes, or when new vulnerabilities are detected.
7.3 - Runtime Inventory
Anchore Enterprise allows you to navigate through your Kubernetes clusters to quickly and easily assess your vulnerabilities, apply policies, and take action on them. You’ll need to configure your clusters for collection before being able to take advantage of these features. See our installation instructions to get set up.
Watching Clusters and Namespaces
Users can opt to automatically scan all the images that are deployed to a specific cluster or namespace. This is helpful to monitor
your overall security posture in your runtime and enforce policies. Before opting to subscribe to a new cluster, it’s important to ensure you have the proper credentials saved in Anchore to pull the images from the registry. Also, watching a new cluster can create a considerable queue of images to work through and impact other users of your Anchore Enterprise deployment.
Using Charts Filters
The charts at the top of the UI provide key contextual information about your runtime. Upon landing on the page you’ll see a summary of your policy evaluations and vulnerabilities for all your clusters. Drilling down into a cluster or namespace will update these charts to represent the data for the selected cluster and/or namespace. Additionally, users can choose to view only clusters or namespaces matching the selected filters. For example, selecting only high and critical vulnerabilities will show only the clusters and/or namespaces that have those vulnerabilities.
Using Views
In addition to navigating your runtime inventory by clusters and namespaces, users can opt to view the images or vulnerabilities across your entire runtime. This is a great way to identify vulnerabilities across your runtime and assess their impact.
Assessing impact
Another important aspect of the Kubernetes Inventory UI is the ability to assess how a vulnerability in a container image impacts your environment. For every container image, when you see a note about its usage being seen in a particular cluster and X more…, you can mouse over the link for a detailed list of where else that container image is being used. This is a fast way to determine the “blast radius” of a vulnerability.
Data Delays
Due to the processing required to generate the data used by the Kubernetes Inventory UI, the results displayed may not be fully up to date. The overall delay depends on the configuration of how often inventory data is collected, and how frequently your reporting data is refreshed. This is similar to delays present on the dashboard.
Policy and Account Considerations
The Kubernetes Inventory is only available for an account’s default policy. You may want to consider setting up an account specifically for tracking your Kubernetes Inventory and enforcing a policy.
7.4 - Working with Subscriptions
Introduction
Anchore Enterprise supports 7 types of subscriptions.
Tag Update
Policy Update
Vulnerability Update
Analysis Update
Alerts
Repository Update
Runtime Inventory
For detailed information about Subscriptions, please see Subscriptions.
Managing Subscriptions
Subscriptions can be managed using AnchoreCTL.
Listing Subscriptions
Running the subscription list command will output a table showing the type and status of each subscription.
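For example (output omitted; it varies by deployment):
anchorectl subscription list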
Any new tag added to Anchore Enterprise by AnchoreCTL will, by default, enable the Tag Update Subscription.
If you do not need this functionality, you can use the flag --no-auto-subscribe or set the environment variable ANCHORECTL_IMAGE_NO_AUTO_SUBSCRIBE when adding new tags.
AnchoreCTL provides commands to help navigate the runtime_inventory Subscription. The subscription will monitor a specific runtime inventory context and add its images to the system for analysis.
Listing Inventory Watchers
# ./anchorectl inventory watch list
✔ Fetched watches
┌──────────────────────────┬───────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────┼───────────────────┼────────┤
│ cluster-one/my-namespace │ runtime_inventory │ false │
└──────────────────────────┴───────────────────┴────────┘
Activating an Inventory Watcher
Note: This command will create the subscription if one does not already exist.
Webhook Configuration
Webhooks are configured in the Anchore Enterprise configuration file config.yaml. In the sample configuration file, webhooks are disabled (commented out).
Webhooks can, optionally, pass basic credentials to the webhook endpoint; if these are not required, the webhook_user and webhook_pass entries can be commented out. By default, TLS/SSL connections will validate the certificate provided. This can be suppressed by uncommenting the ssl_verify option.
If configured, the general webhook will receive all notifications (policy_eval, tag_update, vuln_update) for each user. In this case, <notification_type> will be replaced by the appropriate type, and <userId> will be replaced by the configured user, which is, by default, admin. For example: http://localhost:9090/general/vuln_update/admin
Specific endpoints for each event type can be configured, for example an endpoint for policy_eval notifications. In these cases the url, username, password and SSL/TLS verification can be specified.
This webhook, if configured, will be sent if any CRITICAL system events are logged.
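A minimal sketch of the relevant config.yaml section, assuming the sample layout; the endpoint URLs and credentials below are placeholders, and the exact keys should be confirmed against your deployment’s sample configuration:
webhooks:
  webhook_user: 'user'
  webhook_pass: 'pass'
  ssl_verify: False
  general:
    url: 'http://localhost:9090/general/<notification_type>/<userId>'
  policy_eval:
    url: 'http://localhost:9090/policy_eval/<userId>'
  error_event:
    url: 'http://localhost:9090/error_event/'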
7.5 - Reporting & Remediation
Once you have identified vulnerabilities against software in a container image, the next step is remediation. This section covers typical usage patterns for reporting on vulnerabilities and running possible workflows for remediation.
Matching
On occasion, you may see a vulnerability identified by GHSA (GitHub Security Advisory) instead of CVE (Common Vulnerabilities and Exposures). The reason for this is that Anchore uses an order of precedence to match vulnerabilities from feeds. Anchore gives precedence to OS and third-party package feeds which often contain more up-to-date information and provide more accurate matches with image content. However, these feeds may provide GHSA vulnerability IDs instead of CVEs as provided by NVD (National Vulnerability Database) feeds.
The vulnerability ID Anchore reports depends on how the vulnerability is matched. The order of precedence is packages installed by OS package managers, then third-party packages (java, python, node), and then NVD. The GHSA feeds tend to be ahead of the NVD feeds, so there may be some vulnerabilities that match a GHSA before they match a CVE from NVD.
We are working to unify the presentation of vulnerability IDs to keep things more consistent. Currently our default is to report the CVE unless the GHSA provides a more accurate match.
Reporting
The Reports tab is your gateway to producing insights into the collective status of your container image environment based on the back-end Enterprise Reporting Service.
Note: Because the reporting data cycle is configurable, the results shown in this view may not precisely reflect actual analysis output at any given time.
For more information on how to modify this cycle or the Reporting Service in general, please refer to the Reporting Service documentation.
Custom Reports
The Report feature provides the tools to create custom reports, set a report to run on a schedule (or store the report for future use), and get notified when reports are executed, so you can receive the insights you’re interested in for account-wide artifacts.
In addition, you can create user templates (also known as custom templates) that use any of the preconfigured system templates offered with the application as their basis, or create your own templates from scratch. Templates provide the structure and filter definitions the application uses in order to generate reports.
To jump to a particular guide, select one of the report sub-topics in the navigation pane.
7.5.1 - New Reports
The New Reports tab in the Reports view is where you can create a new report, either on
an ad-hoc basis for immediate download, or for it to be saved for future use. Saved
reports can be executed immediately, scheduled, or both.
Note: The New Reports tab will be the default tab selected in the Reports view when you don’t yet have any saved reports.
Reports created in this view are based on templates. Templates provide the output structure and the filter definitions the user can configure in order for the application to shape the generated report. The Anchore Enterprise client provides immediate access to a number of preconfigured system templates that can be used as the basis for user templates. For more information on how to create and manage templates, please refer to the Templates documentation.
Creating a Report
The initial view of the New Reports tab is shown below:
In the above view you can see that the application is inviting you to select a template
from the dropdown menu. You can either select an item from this dropdown or click in the field itself and enter text in order to filter the list.
Once a template is selected, the view will change to show the available filters for the selected template. The following screenshot shows the view after selecting the Artifacts by Vulnerability template:
At this point you can click Preview Report to see the summary output and download the information, or you can refine the report by adding filters from the associated dropdown. As with the template selection, you can either select an item from the dropdown or click in the field itself and enter text in order to filter the list.
After you click the Preview Report button, you are presented with the summary output and the ability to download the report in a variety of formats:
At this point you can click any of the filters you applied in order to adjust them (or remove them entirely). The results will update automatically. If you want to add more filters you can click the [ Edit ] button and select more items from the available options and then click Preview Report again to see the updated results.
You can now optionally configure the output information by clicking the [ Configure Columns ] button. The resulting popup allows you to reorder and rename the columns, as well as remove columns you don’t want to see in the output or add columns that are not present by default:
Once you’re satisfied with the output, click Download Full Report to download the report in the selected format. The formats provided are:
CSV - comma-separated values, with all nested objects flattened into a linear list of items
Flat JSON - JavaScript object notation, with all nested objects flattened into a linear list of items
Raw JSON - JavaScript object notation, with all nested objects preserved
Saving a Report
The above describes the generation of an ad-hoc report for download, which may
be all you need. However, you can also save the report for future use. To do so, click the Save Report button. The following popup will appear:
Provide a name and optional description for the report, and then select
whether you want to save the report and store results immediately, set it to run on a schedule, or both. If you select the Generate Report option, you can then select the frequency of the report generation. Once you’re satisfied with the configuration, click Save.
The saved report will be stored under Saved Reports and you will immediately be transitioned to this view on success. The features within this
view are described in the Saved Reports section.
7.5.2 - Quick Report
Overview
Generate a report utilizing the back-end Enterprise Reporting Service through a variety of formats - table, JSON, and CSV. If you’re interested in refining your results, we recommend using the plethora of optional filters provided.
Note: Because the reporting data cycle is configurable, the results shown in this view may not precisely reflect actual analysis output at any given time.
For more information on how to modify this cycle or the Reporting Service in general, please refer to the Reporting Service documentation.
The following sections in this document describe how to select a query, add optional filters, and generate a report.
Reports
Selecting a Query
To select a query, click the available dropdown present in the view and select the type of report you’re interested in generating.
Images Affected by Vulnerability
View a list of images and their various artifacts that are affected by a vulnerability. By default, a couple optional filters are provided:
Vulnerability Id: Vulnerability ID
Tag Current Only: If set to true, current tag mappings are evaluated. Otherwise, all historic tag mappings are evaluated
Policy Compliance History by Tag
Query your policy evaluation data using this report type. By default, this report was crafted with compliance history in mind. Quite a few optional filters are provided to include historic tag mappings and historic policy evaluations from any policy that is or was set to active. More info below:
Registry Name: Name of the registry
Repository Name: Name of the repository
Tag Name: Name of the tag
Tag Current Only: If set to true, current tag mappings are evaluated. Otherwise, all historic tag mappings are evaluated
Policy Evaluation Latest Only: If set to true, only the most recent policy evaluation is processed. Otherwise, all historic policy evaluations are evaluated
Policy Active: If set to true, only the active policy at the time of this query is used. Otherwise, all historically active policies are also included. This attribute is ignored if a policy ID or digest is specified in the filter
Note that the default filters provided are optional.
Adding Optional Filters
Once a report type has been selected, an Optional Filters dropdown becomes available with items specific to that query. Any filters considered default for that report type, such as those listed above, are also shown.
You can remove any filters you don’t need by clicking the x in their top-right corner, but as long as they are empty or unset, they will be ignored at the time of report generation.
Generating a Report
After a report type has been selected, you immediately can Generate Report by clicking the button shown in the bottom left of the view.
By default, the Table format is selected but you can click the dropdown and modify the format for your report by selecting either JSON or CSV.
Table
A fast and easy way to browse your data, the table report retrieves paginated results and provides optional sorting by clicking on any column header. Each column is also resizable for your convenience. You can choose to fetch more or fetch all items although please note that depending on the size of your data, fetching all items may take a while.
Download Options
Download your report in JSON or CSV format. Various metadata such as the report type, any filters used when querying, and the timestamp of the report are included with your results. Please note that depending on the size of your data, the download may take a while.
7.5.3 - Report Manager
Overview
Use the Report Manager view to create custom queries, set a report to run on a schedule (or store the configuration for future use), and get notified when they’re executed in order to receive the insights you’re interested in for account-wide artifacts. The results are provided through a variety of formats - tabular, JSON, or CSV - and rely on data retrieved from the back-end Enterprise Reporting Service.
Note: Because the reporting data cycle is configurable, the results shown in this view may not precisely reflect actual analysis output at any given time.
For more information on how to modify this cycle or the Reporting Service in general, please refer to the Reporting Service documentation.
The following sections in this document describe templates, queries, scheduling reports, and viewing your results.
Report Manager
Templates
Templates define the filters and table field columns used by queries to generate report output. The templates provided by the system or stored by other users in your account can be used directly to create a new query or as the basis for crafting new templates.
System Templates
By default, the UI provides a set of system templates:
Images Failing Policy Evaluation
This template contains a customized set of filters and fields, and is based on “Policy Compliance History by Tag”.
Images With Critical Vulnerabilities
This template contains a customized set of filters and fields, and is based on “Images Affected by Vulnerability”.
Artifacts by Vulnerability
This template contains all filters and fields by default.
Tags by Vulnerability
This template contains all filters and fields by default.
Images Affected by Vulnerability
This template contains all filters and fields by default.
Policy Compliance History by Tag
This template contains all filters and fields by default.
Vulnerabilities by Kubernetes Namespace
This template contains all filters and fields by default.
Vulnerabilities by Kubernetes Container
This template contains all filters and fields by default.
Vulnerabilities by ECS Container
This template contains all filters and fields by default.
Creating a Template
In order to define a template’s list of fields and filters, navigate to the Create a New Template section of the page, select a base configuration provided by the various System Templates listed above, and click Next to open a modal.
Provide a name for your new template, add an optional description, and modify any fields or filters to your liking.
The fields you choose control what data is shown in your results and are displayed from left to right within a report table. To optionally refine the result set returned, you can add or remove filter options, set a default value for each entry and specify if the filter is optional or required.
Note that templates must contain at least one field and one filter.
Once the template is configured to your satisfaction, click OK to save it as a Stored Template. Your new template is now available to hydrate a query or as a basis for future templates.
Editing a Template
To view or edit a template that has been stored previously, click its name under Stored Report Items on the right of the page. As with the creation of a template, the list of fields and filters can be customized to your preference.
When you’re done, click OK to save any new changes or Cancel to discard them.
Deleting a Template
To delete a template that you have configured previously, click the red “x” to the left of its name under Stored Report Items and click Yes to remove it. Note that once the template has been removed, you won’t be able to recover it.
Queries
Queries are based on a template’s configuration and can then be submitted to the back-end Enterprise Reporting Service on a recurring schedule to generate reports. These results can then be previewed in tabular form and downloaded in JSON or CSV format.
Creating a Query
To create a query, navigate to the Create a New Query section of the page, select a template configuration, and click Next to open a modal.
After you provide a unique name for the query and an optional description, click OK to save your new query. You will be automatically navigated to view it.
Editing a Query
To view or edit a query, click its name under Stored Report Items on the right of the page to be navigated to the Query View.
Within this view, you can edit its name and description, set a schedule to act as the base configuration for Scheduled Items, and view the various filters set by the template this query was based on.
To save any changes to the query, click Save Query or Save Query and Schedule Report.
Setting a Schedule
In order to set or modify a query’s schedule, click Add/Change Schedule to open a modal.
Reports can be generated daily, weekly, or monthly at a time of your choosing. This can be set according to your timezone or UTC. By default, the schedule is set for weekly on Mondays at 12PM your time.
When scheduling reports to be generated monthly, note that multiple days of the month can be selected and that certain days (the 31st, for example) may not trigger every month.
In the top-right corner of the modal, you can toggle the enabled state of the schedule, which determines whether reports will be executed continuously on the timed interval you saved. Note that pressing OK modifies the schedule but does not save it to the query; click Save Query or Save Query and Schedule Report to do so.
Deleting a Query
To delete a query, click the red “x” to the left of its name under Stored Report Items and click Yes to remove it. Note that every scheduled report associated with that query will also be removed and not be recoverable.
Scheduled Reports
Adding a Scheduled Item
Once you’ve crafted a query based on a system or custom template, supplied any filters to refine the results, and previewed the report generated to ensure it is to your satisfaction, you can add it to be scheduled by clicking Save Query and Schedule Report.
Any schedules created from this view will be listed at the bottom.
Editing a Scheduled Item
To edit a scheduled item, click on Tools within that entry’s Actions column and select Edit Scheduled Item to open a modal.
Here, you can modify the name, description, and schedule for that item. Click Yes to save any new changes or Cancel to discard them.
Deleting a Scheduled Item
To delete a scheduled item, click on Tools within that entry’s Actions column and select Delete Scheduled Item. Note that every report generated from that schedule will also be removed upon clicking Yes and will not be recoverable.
Viewing Results
Click View under a scheduled item’s Actions column to expand the row and view its list of associated reports sorted by most recent. Click View or Tools > View Results to navigate to that report’s results.
If you configured notifications to be sent when a report has been executed, you can navigate to the report’s results by clicking the link provided in its notification.
Downloading results
A preview of up to 1000 result items are shown in tabular form which provides optional sorting by clicking on any column header. If a report contains more than 1000 results, please download the data to view the full report. To do so, click Download to JSON or Download to CSV based on your preferred format.
Various metadata such as the report type, any filters used when querying, and the timestamp of the report are included with your results. Please note that depending on the size of your data, the download may take a while.
Configure Notifications
To be notified whenever a report has been generated, navigate to Events & Notifications > Manage Notifications. Once any previous notification configurations have loaded, add a new one from your preferred endpoint (Email, Slack, etc), and select the predefined event selector option for Scheduled Reports.
This includes the availability of a new result or any report execution failures.
Once you receive a notification, click on the link provided to automatically navigate to the UI to view the results for that report.
7.5.4 - Saved Reports
Overview
The Saved Reports tab in the Reports view is where you can view,
configure, download, or delete reports that have been saved for future use. Each report
entry may contain zero or more results, depending on whether the report has
been run or not.
Note: The Saved Reports tab will be the default tab selected in the
Reports view when you have one or more saved reports.
Viewing a Report
An example of the Saved Reports tab is shown below:
Clicking anywhere within the row other than on an active report title or on the
Actions button will expand it, displaying the executions for that report if
any are available. Clicking an active report title will take you to a view
displaying the latest execution for that report. An inactive report title
indicates that no results are yet available.
If a report has been scheduled but has no executions, the expanded row will look
like the following example:
Reports with one or more executions will look like the following example:
In the above example you can see a list of previously executed reports. Their
completion status is indicated by the green check mark. Reports that are still
in progress are indicated by a spinning icon. Reports that are queued for
execution are indicated by an hourglass icon. The reports shown here are all
complete, so they can be downloaded by clicking the Download Full Report button.
Incomplete, queued, or failed reports cannot be downloaded.
The initial view shows up to four reports, with any older items being viewable
by clicking the View More button. The View More button will disappear
when there are no more reports to show. In addition:
Clicking the Refresh List button will refresh the list of reports, including any executions that may have completed since the last time the list was refreshed. Clicking the Generate Now button will generate a new execution of the report.
Individual report items can be deleted by clicking the Delete button. If the
topmost report item is deleted, the link in the table row will correspond to the
next report item in the list (if any are available).
Note: Deleting all the execution entries for a report will not delete the
report itself. The report will still be available for future executions.
Tools Dropdown
Each report row has a Tools control that allows you to perform the following
actions:
Configure: Opens the report configuration popup, allowing you to change
the report name, description, and schedule
Generate Now: Generates a new execution of the report
Save as Template: Saves the report as a user template, allowing you to use
it as the basis for future reports
Delete: Removes the report and any associated executions. If all reports
are deleted, the page will transition to the New Reports tab and the Saved
Reports tab will be disabled.
7.5.5 - Templates
Overview
The Templates tab in the Reports view is where you can view and manage
report templates. Templates provide the basis for creating the reports executed
by the system and specify which filters are applied to the retrieved dataset and
how the returned data is shaped.
A number of system templates are provided with the application; all of these
can be used as-is, or as a starting point for creating your own user templates.
Viewing Templates
An example of the System Templates view in the Templates tab is shown
below:
In this view you can see all the system templates provided by default, and their
associated descriptions. System templates cannot be deleted, but can be copied
and modified to create your own user templates.
An alternate way of creating a new user template is by clicking the Create New
Template button. You will be presented with a dialog that allows you to
select an existing system template as your starting point, or base your
composition on any of the custom templates created by you or other users:
Selecting a template from the provided dropdown will open the Create a New
Template dialog:
Within this dialog you can provide a unique name and optional description for
the new template. In addition, you can modify the filters available when composing
reports based on this template, and the columns that will be displayed in the
resulting report:
Filters: You can add or remove filters, set
default values, and specify if the filter is optional or required. Filters are
displayed from left to right when composing a report—you can change the display
order by clicking on a row hotspot and dragging the row item up or down
the list.
Columns: You can add or remove columns, change their display
order, or provide custom column names to be used when the data is presented in
the tabular form offered by comma-separated values (CSV) file downloads.
Columns are displayed from left to right within a report table—you can change
the display order by clicking on a row hotspot and dragging the row item up or
down the list. Note that templates must contain at least one column.
Once you have configured the filters and columns, you can specify if the report
will be scoped to return results against the analysis data in either the current selected
account or from all accounts, and click OK. The new template will be added
to the list of available user templates.
Custom Templates
The custom templates view shows all user-defined templates present in
the current selected account. An example of the Custom Templates view is shown below:
Unlike system templates, custom templates can be edited or deleted in addition
to being copied. Clicking the Tools button for a custom template will
display the following options:
Note that any changes you make to templates in this view, or any new entries you
create, will be available to all users in the current selected account.
7.6.1 - Content Hints
Anchore Enterprise includes the ability to read a user-supplied ‘hints’ file to allow users to add software artifacts to Anchore’s
analysis report. The hints file, if present, contains records that describe a software package’s characteristics explicitly,
and are then added to the software bill of materials (SBOM). For example, if the owner of a CI/CD container build process
knows that there are some
software packages installed explicitly in a container image, but Anchore’s regular analyzers fail to identify them, this mechanism
can be used to include that information in the image’s SBOM, exactly as if the packages were discovered normally.
Hints cannot be used to modify the findings of Anchore’s analyzer beyond adding new packages to the report. If a user specifies
a package in the hints file that is found by Anchore’s image analyzers, the hint is ignored and a warning message is logged
to notify the user of the conflict.
Once enabled, the analyzer services will look for a file with a specific name, location and format located within the container image - /anchore_hints.json.
The format of the file is illustrated using some examples, below.
OS Package Records
OS Packages are those that will represent packages installed using OS / Distro style package managers. Currently supported package types are rpm, dpkg, apkg
for RedHat, Debian, and Alpine flavored package managers respectively. Note that, for OS Packages, the name of the package is unique per SBOM, meaning
that only one package named ‘somepackage’ can exist in an image’s SBOM, and specifying a name in the hints file that conflicts with one with the same
name discovered by the Anchore analyzers will result in the record from the hints file taking precedence (override).
Minimum required values for a package record in anchore_hints.json
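A minimal sketch of an OS package record, using the musl package referenced later in this section (the version shown is illustrative, not a required value):
{
  "name": "musl",
  "version": "1.1.20-r8",
  "type": "apkg"
}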
Non-OS / language package records are similar in form to the OS package records, but with some extra/different characteristics being supplied, namely
the location field. Since multiple non-os packages can be installed that have the same name, the location field is particularly important as it
is used to distinguish between package records that might otherwise be identical. Valid types for non-os packages are currently java, python, gem, npm, nuget, go, binary.
For the latest types that are available, see the anchorectl image content <someimage> output, which lists available types for any given deployment of Anchore Enterprise.
Minimum required values for a package record in anchore_hints.json
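A minimal sketch of a non-OS package record, using the wicked gem referenced below (the version and location values are illustrative assumptions):
{
  "name": "wicked",
  "version": "0.6.11",
  "type": "gem",
  "location": "/app/gems/wicked"
}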
Using the above examples, a complete anchore_hints.json file, when discovered by Anchore Enterprise located in /anchore_hints.json inside any container image, is provided here:
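(The following is a sketch reconstructed from the two records above under a top-level packages list; verify the exact structure against your deployment’s documentation.)
{
  "packages": [
    { "name": "musl", "version": "1.1.20-r8", "type": "apkg" },
    { "name": "wicked", "version": "0.6.11", "type": "gem", "location": "/app/gems/wicked" }
  ]
}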
With such a hints file in an image based, for example, on alpine:latest, the resulting image content would report these two package/version records
as part of the SBOM for the analyzed image, when viewed using anchorectl image content <image> -t os and anchorectl image content <image> -t gem
to view the musl and wicked package records, respectively.
Note about using the hints file feature
The hints file feature is disabled by default, and is meant to be used in very specific circumstances where a trusted entity is entrusted with creating
and installing, or removing an anchore_hints.json file from all containers being built. It is not meant to be enabled when the container image builds
are not explicitly controlled, as the entity that is building container images could override any SBOM entry that Anchore would normally discover, which
affects the vulnerability/policy status of an image. For this reason, the feature is disabled by default and must be explicitly enabled in configuration
only if appropriate for your use case.
7.6.2 - Corrections
When Anchore analyzes an image, it persists a Software Bill of Materials (SBOM) which will be submitted for periodic scanning for known vulnerabilities. During the scan, various attributes will be used from the SBOM package artifacts to match to the relevant vulnerability data. Depending on the ecosystem, the most important of these package attributes tend to be Package URL (purl) and/or Common Platform Enumeration (CPE). The Anchore analyzer attempts to generate a best effort guess of the CPE candidates for a given package as well as the purl based on the metadata that is available at the time of analysis (ex. for Java packages, the manifest, which contains multiple different version specifications among other metadata), but sometimes gets this wrong.
To facilitate the necessary corrections in these instances, Anchore provides the Corrections feature. Now, a user can provide a correction that will update a given package’s metadata so that attributes (including CPEs and Package URLs) can be corrected at the time that Anchore performs a vulnerability scan.
An example follows for a very common scenario in the Java Maven ecosystem: the official Maven groupId and artifactId are not available in the metadata, and the best guess that the Anchore analyzer surfaces for the package URL and CPEs is not in line with the vulnerability data, so a correction can be issued to align them.
Imagine an Anchore analysis results in the following package content:
There are several issues with this entry: the Maven groupId and artifactId within the purl, the package name, and the CPE are all out of line with what is expected for proper vulnerability matching.
Using the above example, a user can add a correction using AnchoreCTL or via HTTP POST to the /corrections endpoint:
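A hypothetical payload illustrating the structure described below (the package name and replacement values are invented for illustration, and the exact key names should be confirmed against the Enterprise Swagger spec referenced at the end of this section):
{
  "description": "Correct purl and CPE for example-library",
  "type": "package",
  "match": {
    "type": "java",
    "field_matches": [
      { "field_name": "package", "field_value": "example-library" }
    ]
  },
  "replace": [
    { "field_name": "purl", "field_value": "pkg:maven/com.example/example-library@{implementationVersion}" },
    { "field_name": "cpes", "field_value": "cpe:2.3:a:example:example-library:{implementationVersion}:*:*:*:*:*:*:*" }
  ]
}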
description: A description of the correction being added (for note taking purposes)
replace: a list of field name/value pairs to replace. For the “cpes” and “purl” field only, Anchore Enterprise can recognize a templated field via curly braces “{}”. Package JSON keys contained here will be replaced with their corresponding value from the package. For “cpe” if the templated field does not exist in the package, the corresponding cpe component will be replaced with *. For “purl” if the templated field doesn’t exist the purl replacement will be aborted and the purl will remain unchanged from the original value.
type: The type of correction being added. Currently only “package” is supported
match:
type: The type of package to match upon. Supported values are based on the type of content available to images being analyzed (ex. java, gem, python, npm, os, go, nuget)
field_matches: A list of name/value pairs based on which package metadata fields to match this correction upon
The schema of the fields to match can be found by outputting the direct JSON content for the given content type:
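For example, for Java content (a sketch; the -o json output flag is an assumption to confirm against anchorectl --help):
anchorectl image content <Image_sha256_ID> -t java -o json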
Note: if a new field is specified here, it will be added to the content output when the correction is matched. See below for additional functionality around CPEs and Package URL
To add the above JSON using AnchoreCTL, the following command can be used:
Note: Don’t forget to replace the Image_sha256_ID with the image ID you’re trying to test.
Corrections may be updated and deleted via the API as well. Creation of a Correction generates a UUID that may be used to reference that Correction later.
Refer to the Enterprise Swagger spec for more details.
8 - Anchore Enforce - Compliance Management
What is a policy?
A policy is composed of a set of rules that are used to perform an evaluation
on a source repository or container image. These rules include—but are not
limited to—checks on security, known vulnerabilities, configuration file
contents, the presence of credentials, manifest changes, exposed ports, or any
user defined checks.
Policies can be deployed site wide, or customized to run against specific
sources, container images, or categories of application. For additional
information, refer to the Policy
concepts section.
Once a policy has been applied to a source repository or image container, it can
return one of two results:
“Passed” indicating that source or image complies with your policy.
“Failed” indicating that the source or image is non-compliant with your policy.
A policy includes the following elements:
Rule Sets
A policy is made up of a set of rules that are used to perform an evaluation on a source repository or container image. These policies can be deployed site wide or customized for specific source repositories, container images, or categories of applications. A policy may contain one or more named rule sets.
Policy rule checks are made up of gates and triggers. A gate is a set of policy checks against broad categories like vulnerabilities, secret scans, licenses, and so forth. It will include one or more triggers, which are checks specific to the gate category. For additional information, refer to the Policy Rules concepts section.
The policy additionally specifies the following action results:
STOP: Critical error that should stop the deployment by failing the policy evaluation.
WARN: Issue a warning.
GO: Okay to proceed.
The result of an evaluation will be based on the actions configured in your rule set.
If you are creating a policy rule for a source repository, only vulnerability checks are available.
Allowlists
An allowlist contains one or more exceptions that can be used during policy
evaluations, such as allowing a CVE to be excluded. A policy may contain multiple allowlists.
Mappings
A policy mapping defines which policies and allowlists should be used to
perform the policy evaluation of a given source repository or container image.
A policy may contain multiple mappings including wildcard mappings that
apply to multiple elements.
Allowed Images
An allowed images list defines one or more images that will always pass policy
evaluation regardless of any policy violations. Allowed images can be
specified by name, image ID, or image digest. A policy contains a
single list of allowed images.
Denied Images
A denied images list defines one or more images that will always fail policy
evaluation. Denied images can be specified by name, image ID, or image digest.
A policy contains a single list of denied images.
Listing Policies
The Policies tab contains a table
that lists the policies defined within an account.
Note: A lock icon next to the policy name indicates that the policy cannot
be deleted. Policy rules that are used by policy mappings in the policy (which
will be listed under the Mappings column entry within the Edit option) cannot be deleted
until they are removed from every associated mapping.
Policies can also be managed directly using the REST API or the anchorectl policy command. It is recommended that any policy configuration be handled via the UI and not AnchoreCTL.
# anchorectl policy list
✔ Fetched policies
┌────────────────┬──────────────────────────────────────┬────────┬──────────────────────┐
│ NAME │ POLICY ID │ ACTIVE │ UPDATED │
├────────────────┼──────────────────────────────────────┼────────┼──────────────────────┤
│ Default policy │ 2c53a13c-1765-11e8-82ef-23527761d060 │ true │ 2023-10-25T20:39:28Z │
│ devteam1policy │ da8208a2-c8ae-4cf2-a25b-a52b0cdcd789 │ false │ 2023-10-25T20:47:16Z │
└────────────────┴──────────────────────────────────────┴────────┴──────────────────────┘
** times are reported in UTC
Using the policy get command, summary or detailed information about a policy can be retrieved. The policy is referenced using its unique POLICY ID.
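For example, using the Default policy ID shown in the listing above (a sketch):
anchorectl policy get 2c53a13c-1765-11e8-82ef-23527761d060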
The Tools dropdown menu in the Actions column provides options to:
Edit the policy
Copy the policy
Download the policy as a JSON document
Delete the policy (if it is not being used by any policy mapping)
The Edit button provides options to view and edit:
Rule Sets
Allowlists
Mappings
Allowed/Denied Images
Policies with AnchoreCTL
Compliance management and policies are central to the concept of Anchore Enterprise. This section provides information on how to check the compliance of an image against various policies via the command line using AnchoreCTL, and then how to preview the outcome. This can be useful for various CI/CD pipelines and automation.
Anchore Enterprise can store multiple policies for each account, but only one policy can be active at any point in time. All users within an account share the same set of policies. It is common to store historic policies to allow previous policies and evaluations to be inspected. The active policy is the one used for evaluation for notifications, incoming Kubernetes webhooks (unless configured otherwise), and other automatic system functions, but a user may request evaluation of any policy stored in the system using its ID.
Please find the AnchoreCTL commands for checking images against various policies over on the Testing Policies page.
8.1 - Policy Packs
Introduction
Secure is the default policy shipped with an Anchore Enterprise deployment. For other packs, you must have the correct license and subscription entitlement in order to be granted access.
Anchore Enterprise provides pre-built policy packs to scan for the following compliance frameworks: FedRAMP, NIST, CIS, DoD, and Secure, each described in the sub-sections below.
8.1.1 - FedRAMP
Current FedRAMP policy pack version: Anchore FedRAMP v5 Checks v20250101
Please contact Anchore Customer Success to request the latest.
Introduction
FedRAMP (Federal Risk and Authorization Management Program) is a standardized approach for assessing, authorizing, and monitoring cloud service providers (CSPs) that provide service to federal agencies. Through a rigorous and comprehensive process, FedRAMP ensures that CSPs meet security standards by providing a baseline set of security controls in order to enhance the overall security for federal information systems.
Anchore’s FedRAMP policy validates whether container images scanned by Anchore Enterprise are compliant with the FedRAMP Vulnerability Scanning Requirements and also validates them against FedRAMP controls specified in NIST 800-53 Rev 5 and NIST 800-190.
Anchore’s FedRAMP policy only checks for specification requirements relevant to software supply chain security.
Anchore’s FedRAMP policy checks for the following specifications:
AC-6(10) ACCESS CONTROL: Prevent Non-Privileged Users from Executing Privileged Functions
CM-2(2), CM-3(1), CM-6 CONFIGURATION MANAGEMENT: Baseline Configuration | Configure Systems and Components for High-risk Areas
IA-05(7) IDENTIFICATION AND AUTHENTICATION: Authenticator Management | No Embedded Unencrypted Static Authenticators
RA-5, SI-02(2) RISK ASSESSMENT: Vulnerability Monitoring and Scanning
SC-5 SYSTEM AND COMMUNICATIONS PROTECTION: Denial-of-Service Protection
Enabling the FedRAMP Policy
If you are an Anchore Enterprise customer, you will receive an email, which includes a json file for the specific FedRAMP policy that comes with your service.
Access to this policy pack requires an Anchore Enterprise add-on entitlement.
Navigate to the Policies tab in Anchore Enterprise and click on the ‘Import Policy’.
Drag and drop, or paste the .json file to import the policy into Anchore Enterprise.
Navigate to the Image tab in Anchore Enterprise and you will now be able to evaluate an image with the FedRAMP policy.
Or run the following command using AnchoreCTL
As an example, we will add a centos image and evaluate it using the FedRAMP policy. Please allow some time for Anchore to analyze the image once it has been added.
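A sketch of the corresponding AnchoreCTL steps, assuming the imported FedRAMP policy has been set as the active policy:
anchorectl image add docker.io/centos:latest
anchorectl image check -f docker.io/centos:latest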
Some of the control specifications need configuration based on the user’s environment. The control specifications are represented by ‘Rule Sets’ in Anchore Enterprise. Navigate to the Policies tab and click on the ‘Edit’ under ‘Actions’.
It is recommended all configuration changes to rule sets be done in the Anchore Enterprise UI.
You will be able to view all the FedRAMP specifications Anchore analyzes for. Under each Rule Set, please edit the ones that require configuration.
As an example, a user may need to change the port configuration for CM-7(1) CONFIGURATION MANAGEMENT, which checks for network port exposures.
Make sure to go through each of the Rule Sets to configure all applicable specifications. Save and close.
The following rule sets MUST be configured before using the FedRAMP policy:
CM-2(2), CM-3(1), CM-6 CONFIGURATION MANAGEMENT: Baseline Configuration | Configure Systems and Components for High-risk Areas
8.1.2 - NIST
Current NIST 800-53 and 800-190 policy pack versions: Anchore NIST 800-53 v20250101 and Anchore NIST 800-190 v20250101
Please contact Anchore Customer Success to request the latest.
Introduction
The National Institute of Standards and Technology (NIST) is a non-regulatory agency of the U.S. Commerce Department that provides industry standards and guidelines to help federal agencies meet requirements set by the Federal Information Security Management Act (FISMA).
Anchore Enterprise scans for the following NIST policies:
NIST 800-53
NIST 800-190
Anchore also covers NIST 800-218 (SSDF) with the SSDF Attestation Form Guide and Evidence document, which includes evidence-based artifacts for an official SSDF Attestation Form submission. To learn more, see the SSDF section below.
NIST 800-53 provides guidelines to ensure the security of information systems used within the federal government. In order to maintain the integrity, confidentiality and security of federal information systems, NIST 800-53 provides a catalogue of controls in order for federal agencies to meet industry standard and compliance.
Anchore checks for the following control specifications in the NIST 800-53 policy:
AC-6(10) Container Image Must Have Permissions Removed from Executables that Allow a User to Execute Software at Higher Privileges
CM-6(b) Confidential Data Checks
CM-7(1b) Network Port Exposure Checks
CM-7(a) Container Image Build Content Checks
IA-5(2a) Base Image Checks
IA-5(7) Embedded Credentials
RA-5 Software Vulnerability Checks
SC-5 Image Checks
SC-8(2) Base Image Checks
SI-2(6) Image Software Update/Layer Checks
NIST 800-190 provides guidelines to ensure the security of application containers used within the federal government. In order to maintain the integrity, confidentiality and security of federal application containers, NIST 800-190 provides a catalogue of controls in order for federal agencies to meet industry standard and compliance.
Anchore checks for the following control specifications in the NIST 800-190 policy:
3.1.1 Image Vulnerabilities
3.1.2 Image Configuration Defects
3.1.3 Embedded Malware
3.1.4 Embedded Clear Text Secrets
Enabling the NIST Policy
For this walkthrough, we will be using the NIST 800-53 policy for demonstration.
If you are an Anchore Enterprise customer, you will receive an email, which includes a json file for the NIST 800-53 policy that comes with your service.
Access to this policy pack requires an Anchore Enterprise add-on entitlement.
Navigate to the Policies tab in Anchore Enterprise and click on the ‘Import Policy’.
Drag and drop, or paste the .json file to import the policy into Anchore Enterprise.
Navigate to the Image tab in Anchore Enterprise and you will now be able to evaluate an image with the NIST 800-53 policy.
Or run the following command using AnchoreCTL
As an example, we will add a centos image and evaluate it using the NIST 800-53 policy. Please allow some time for Anchore to analyze the image once it has been added.
Some of the control specifications need configuration based on the user’s environment. The control specifications are represented by ‘Rule Sets’ in Anchore Enterprise. Navigate to the Policies tab and click on the ‘Edit’ under ‘Actions’.
It is recommended all configuration changes to rule sets be done in the Anchore Enterprise UI.
You will be able to view all the NIST 800-53 specifications Anchore analyzes for.
As an example, a user may need to change the port configuration for CM-7(1b): Network Port Exposure Checks, which checks for network port exposures.
Make sure to go through each of the Rule Sets to configure all applicable specifications. Save and close.
The following rule sets MUST be configured before using the NIST 800-53 policy:
CM-6(b) Confidential Data Checks
CM-7(1b) Network Port Exposure Checks
CM-7(a) Container Image Build Content Checks
8.1.2.1 - SSDF
In February 2021, The National Institute of Standards and Technology (NIST) created NIST SP 800-218, otherwise known as Secure Software Development Framework (SSDF), in response to a new executive order mandated by the federal government.
SSDF provides a comprehensive set of guidelines aimed at integrating security into the software development lifecycle, thereby enhancing the security posture of software products from inception to deployment. To verify and validate that organizations meet the controls needed to be SSDF compliant, CISA created an official SSDF Attestation Form that allows organizations to verify and attest that they adhere to the SSDF guidelines and comply with a subset of security controls.
Purpose
Anchore provides a downloadable document that serves as an evidence attachment for the SSDF Attestation Form. The document makes the assumption Anchore Enterprise is used in the organization’s environment and is configured to scan the software that is in scope for the SSDF Attestation Form.
The SSDF Attestation Form consists of three sections that must be completed. Sections I and II cover organization-specific details, whereas Section III lists requirements against various security controls. The intent of this document is to provide guidance for first time applicants and help organizations save time collecting evidence required for Section III of the SSDF Attestation Form.
Download
Detailed instructions to complete the form can be found on page 1. This document uses the official SSDF Attestation Form as its base template. Once completed, the document can be directly attached to an SSDF Attestation Form submission. Click below to obtain the form:
Additional Resources
SSDF Attestation 101: A practical guide for Software Producers - Download eBook
Using the Common Form for SSDF Attestation: What Software Producers Need to Know - Read blog
Automate NIST compliance and SSDF attestation with Anchore Enterprise - Learn more
8.1.3 - CIS
Please contact Anchore Customer Success to request the latest.
Introduction
The Center for Internet Security (CIS) provides prescriptive configuration recommendations for a variety of software vendors. Anchore’s CIS policy pack is based on the CIS Docker 1.7 Benchmark and validates a subset of security and compliance checks against container images deployed on Docker version 1.7.
Anchore checks for the following control specifications in the CIS policy:
4.1 Ensure that a user for the container has been created
4.2 Ensure that containers use only trusted base
4.3 Ensure that unnecessary packages are not installed in the container
4.4 Ensure images are scanned and rebuilt to include security patches
4.6 Ensure that HEALTHCHECK instructions have been added to container images
4.7 Ensure update instructions are not used alone in Dockerfiles
4.8 Ensure setuid and setgid permissions are removed
4.9 Ensure that COPY is used instead of ADD in Dockerfiles
4.10 Ensure secrets are not stored in Dockerfiles
4.11 Ensure only verified packages are installed
5.8 Ensure privileged ports are not mapped within containers
Enabling the CIS Policy
Note
For this walkthrough, we will be using the IronBank policy for demonstration. Replace this policy file and its content with your CIS policy pack.
If you are an Anchore Enterprise customer, you will receive an email, which includes a json file for the IronBank policy that comes with your service.
Access to this policy pack requires an Anchore Enterprise add-on entitlement.
Navigate to the Policies tab in Anchore Enterprise and click on the ‘Import Policy’.
Drag and drop, or paste the .json file to import the policy into Anchore Enterprise.
Navigate to the Image tab in Anchore Enterprise and you will now be able to evaluate an image with the IronBank policy.
Or run the following command using AnchoreCTL
As an example, we will add a centos image and evaluate it using the IronBank policy. Please allow some time for Anchore to analyze the image once it has been added.
To apply the active IronBank policy and get a simple pass/fail check:
#anchorectl image check -f docker.io/centos:latest
✔ Evaluated against policy [failed] docker.io/centos:latest
Tag: docker.io/centos:latest
Digest: sha256:a1801b843b1bfaf77c501e7a6d3f709401a1e0c83863037fa3aab063a7fdb9dc
Policy ID: 5-DoD-Iron-Bank-Docker
Last Evaluation: 2024-05-03T22:08:52Z
Evaluation: fail
Final Action: stop
Reason: policy_evaluation
Configuring Rule Sets for the CIS Policy
Some of the control specifications need configuration based on the user’s environment. The control specifications are represented by ‘Rule Sets’ in Anchore Enterprise. Navigate to the Policies tab and click on the ‘Edit’ under ‘Actions’.
It is recommended all configuration changes to rule sets be done in the Anchore Enterprise UI.
The following rule sets MUST be configured before using the CIS policy:
4.2 Ensure that containers use only trusted base
4.3 Ensure that unnecessary packages are not installed in the container
5.8 Ensure privileged ports are not mapped within containers
8.1.4 - DoD
Throughout this guide, we break down the deployment and configuration of the DoD policy with the following sections:
Current IronBank policy pack version: Anchore DoD Iron Bank v20250101
Current DISA policy pack version: Anchore DISA Image Creation and Hardening Guide v20250101
Please contact Anchore Customer Success to request the latest.
Introduction
Anchore Enterprise scans for the following DoD policies:
DISA Image Creation and Deployment Guide
IronBank
Part of the Department of Defense (DoD), the Defense Information Systems Agency (DISA) provides IT and communications support to both the US government and federal organizations. The DISA Image Creation and Deployment Guide policy provides security and compliance checks that align with specific NIST 800-53 and NIST 800-190 security controls and requirements as described in the DoD Container Image Creation and Deployment Guide.
Anchore checks for the following control specifications in the DISA policy:
AC6(10) Container Image Must Have Permissions Removed from Executables that Allow a User to Execute Software at Higher Privileges
CM-6(b) Confidential Data Checks
CM-7(1b) Network Port Exposure Checks
CM-7(a) Container Image Build Content Checks
IA-5(2a) Base Image Checks
IA-5(7) Embedded Credentials
RA-5 Software Vulnerability Checks
SC-5 Image Checks
SC-8(2) Base Image Checks
SI-2(6) Image Software Update/Layer Checks
The DoD IronBank policy validates images against DoD security and compliance requirements in alignment with U.S. Air Force security standards at Platform One and IronBank. The IronBank policy has been written in accordance with DoD documentation and includes the following categories of checks:
Dockerfile Checks
User Checks
File Checks
Istio Checks
Software Checks
Transfer Protocol Checks
Node.js Checks
Etcd Checks
Snort Checks
Jenkins Checks
Grafana Checks
UBI7 Checks
Chef Checks
Sonarqube Checks
Prometheus Checks
Postgres Checks
Nginx Checks
OpenJDK Checks
Twistlock Checks
Keycloak Checks
Fluentd Checks
Elasticsearch Checks
Kibana Checks
Redis Checks
Apache HTTP Checks
Apache Tomcat Checks
Enabling the DoD Policy
For this walkthrough, we will be using the IronBank policy for demonstration.
If you are an Anchore Enterprise customer, you will receive the JSON file during onboarding or via email distribution.
Access to this policy pack requires an Anchore Enterprise add-on entitlement.
Navigate to the Policies tab in Anchore Enterprise and click on the ‘Import Policy’.
Drag and drop, or paste the .json file to import the policy into Anchore Enterprise.
Navigate to the Image tab in Anchore Enterprise and you will now be able to evaluate an image with the IronBank policy.
Alternatively, run the following command using AnchoreCTL.
As an example, we will add a CentOS image and evaluate it using the IronBank policy. Allow some time for Anchore to analyze the image after it is added.
To apply the active IronBank policy and get a simple pass/fail check:
# anchorectl image check -f docker.io/centos:latest
✔ Evaluated against policy [failed] docker.io/centos:latest
Tag: docker.io/centos:latest
Digest: sha256:a1801b843b1bfaf77c501e7a6d3f709401a1e0c83863037fa3aab063a7fdb9dc
Policy ID: 5-DoD-Iron-Bank-Docker
Last Evaluation: 2024-05-03T22:08:52Z
Evaluation: fail
Final Action: stop
Reason: policy_evaluation
Configuring Rule Sets for the DoD Policy
Some of the control specifications need configuration based on the user’s environment. The control specifications are represented by ‘Rule Sets’ in Anchore Enterprise. Navigate to the Policies tab and click ‘Edit’ under ‘Actions’.
It is recommended that all configuration changes to rule sets be made in the Anchore Enterprise UI.
The IronBank policy does not need any configuration changes for the Rule Sets. However, the DISA policy will need configuration changes for certain specifications.
As an example, a user may need to change the port configuration for CM-7(1b): Network Port Exposure Checks, which checks for network port exposures.
Make sure to go through each of the Rule Sets to configure all applicable specifications. Save and close.
The following rule sets MUST be configured before using the DISA policy pack:
CM-6(b) Confidential Data Checks
CM-7(1b) Network Port Exposure Checks
CM-7(a) Container Image Build Content Checks
8.1.5 - Secure
The default Secure policy pack comes included (and enabled) in every fresh deployment of Anchore Enterprise.
Current Secure policy pack version: Anchore Enterprise - Secure v20250101
Introduction
Anchore’s default Secure policy pack includes standard vulnerability and system-level checks and can be used against an image SBOM for policy compliance based on the policy actions configured in each rule. All the rules that are configured by default can (and should) be adjusted according to an organization’s security policy.
Anchore checks for the following control specifications in the Secure policy:
Feed Data not available: Fail if feed data is unavailable.
Outdated Feed Data: Warn if feed data is more than 2 days old. This value can be adjusted based on internal requirements. (Available for both Container and Source)
Warn on low and moderate with fixes: Warn when there are low and medium severity vulnerabilities found that also have a fix present. (Available for both Container and Source)
Warn on week old Important: Warn when there are important severity vulnerabilities found that are more than a week old. (Available for both Container and Source) “Important” indicates the severity of a vulnerability. By default, it is set to “High”, but this can be configured in the policy rule set.
Fail on criticals: Fail when there are critical severity vulnerabilities present. (Available for both Container and Source)
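As an illustration of how one of these checks is expressed, the sketch below shows roughly how the “Outdated Feed Data” rule might look in an exported policy JSON. The rule layout follows Anchore’s policy bundle format and the values are assumptions based on the description above, not the shipped policy content.
{
  "gate": "vulnerabilities",
  "trigger": "stale_feed_data",
  "action": "WARN",
  "params": [
    { "name": "max_days_since_sync", "value": "2" }
  ]
}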
8.2 - Managing Policies
Policies
The Policy tab displays a list of policies that are loaded in the
system. Each policy has a unique name, unique ID (UUID), and an optional
description.
Anchore Enterprise supports multiple policies. AnchoreCTL, API, and CI/CD
plugins support specifying a policy when requesting a source repository or
container image evaluation. For example, the development team may use a
different set of policy checks than the operations team. In this case, the
development team would specify their policy ID as part of their policy
evaluation request.
If no policy ID is specified, then Anchore Enterprise will use the active policy
which can be considered as the default policy. Only one policy can be set as
default/active at any time. This policy will be highlighted with a green ribbon.
Note: Policies which are not marked as Active can still be explicitly
requested as part of a policy evaluation.
If multiple users are accessing Policies, or if policies are
being added or removed through the API or AnchoreCTL, then you may update the list of policies by clicking on the Refresh the Policy List button.
The following command can be run to list policies using AnchoreCTL:
# anchorectl policy list
Adding and Editing Policies
You can add a new policy with custom rule sets within the Policies tab.
Click Create New Policy and provide a name and description.
Click Add New Rule Set and select Source Repository if you want the new policy to apply to a source, or select Container Image to have the policy apply to an image.
You can configure each rule set under the Edit option. Start by selecting an item from the Gate dropdown list, where each item represents a category of policy checks.
Note: If you are creating a policy rule for a source repository, only
vulnerabilities are available.
After selecting a gate item, hover over the (i) indicator next to
Gate to see additional descriptive details about the gate you have
selected.
Click the Triggers drop down and select a specific check that you want
associated with this item, such as package, vulnerability data unavailable,
and so on. Triggers may have parameters, some of which may be optional.
If any optional parameters are associated with the trigger you select, these
will also be displayed in an additional field where they can be added or
removed. Optional parameters are described in more detail in the next
section.
Select an action to apply to the policy rule. Choose STOP, WARN, or
GO. The action options are only displayed once all required parameters
have been provided, or if no mandatory parameters are required. Once an
action has been selected, the rule is added to the main list of rules
contained in the policy.
Note: Adding a policy will not automatically make it active. You will need to activate the policy using the activate command.
The policy activate command can be used to activate a policy. The policy is referenced using its unique POLICY ID which can be retrieved using the policy list command.
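For example, a typical sequence might look like the following, where the UUID is an example value taken from the policy list output:
# anchorectl policy list
# anchorectl policy activate 4c1627b0-3cd7-4d0f-97da-00be5aa835f4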
If you already have a policy that you would like to use as a base for
another policy, you can make a copy of it, give it a new name, and then work
with the policies, mappings, allowlists, and allowed or denied images.
From the Tools list, select Copy Policy.
Enter a unique name for the copy of the policy.
Optional: You can add a description to explain the new policy. This is
recommended.
Click OK to copy the policy.
Download a Policy
From the Tools menu, select Download to JSON.
The JSON file is downloaded just like any other downloaded file to your
computer. Save the downloaded JSON file to your location of choice.
Note: Use the following command to download a policy using AnchoreCTL.
The policy must be referenced by its UUID. For example:
# anchorectl policy get 4c1627b0-3cd7-4d0f-97da-00be5aa835f4 --detail > policy.json
Delete a Policy
If you no longer use a policy, you can delete it. An active (default)
policy cannot be deleted. To delete the active policy, you must first mark
another policy as active.
From the Tools menu, select Delete Policy.
Click Yes to confirm that you want to delete the policy.
Warning: Once the policy is deleted, you cannot recover it.
Note: Use the following command to delete a policy using AnchoreCTL.
The policy must be referenced by its UUID. For example:
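# anchorectl policy delete 4c1627b0-3cd7-4d0f-97da-00be5aa835f4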
The following example shows a sophisticated policy check. The metadata gate has
a single trigger that allows checks to be performed against various attributes
of an image, including image size, architecture, and operating system
distribution:
The Attribute parameter drop-down includes a number of attributes taken from
image metadata, including the operating system distribution, number of layers,
and architecture of the image (AMD64, ARM, and so forth).
Once an attribute has been selected, the Check dropdown is used to create a
comparison expression.
The type of comparison varies based on the attribute. For example, numeric
comparison operators such as >, <, and >= would be relevant for a numeric field
such as size, while other operators such as not in may be useful for querying
a data field such as distro.
In this example, by entering rhel centos oracle in the Value field, our
rule will check that the distro (that is, the operating system) under analysis
is not RHEL, Centos, or Oracle.
Optional Parameters
If a trigger has optional parameters, they will be automatically displayed in
the policy editor, and an editable field next to the Triggers drop-down will
show all the current selections.
You can remove unneeded optional parameters by clicking the X button
associated with each entry in the Optional Parameters list, or by clicking
the X button within each associated parameter block.
If an optional parameter is removed, it can be reapplied to the rule by clicking
the Optional Parameters field and selecting it from the resulting dropdown
list.
8.3 - Policy Gates
In this section of the document, we list and describe the current gates (and related triggers and parameters) that are supported within Anchore policy.
Getting Started
Before diving into the specifics of Policy Rule Sets and Gates, navigate to the Policies tab in order to create a new Policy.
Once a Policy has been created, you can start creating Rule Sets that define the Policy. When adding a new Rule Set, you will be prompted to select either “Source Repository” or “Container Images” that will define the source type of the Rule Set.
Note Currently, only the Vulnerabilities Gate and the following Triggers are available for Source Repository Rule Sets: - Denylist - Package - Stale Feed Data
Components of a Policy Rule Set
A gate is a collection of checks that are logically grouped to provide a broader context for policy evaluations. It is the first step a user must set when creating a rule.
Once a gate has been selected, a list of associated triggers for the selected gate is provided. A trigger defines a specific condition to check within the context of the gate.
Once a trigger has been selected, a list of associated parameters is provided to customize the matched behavior of the rule. Please note that a trigger may contain both required and optional parameters. Required parameters must be configured in order to save a rule.
Finally, the last step in the process is to configure the action for every matched instance of a trigger. The available actions are “STOP”, “WARN”, and “GO”.
Note Please click here for more detailed information on the architectural framework of a policy rule set.
The final policy evaluation against an image SBOM will result in a failure if and only if at least one rule within any rule set in the active policy has been triggered with a “STOP” action.
Rule actions are set per rule and cannot interfere with other rules in the same policy. For example, if we create a policy with the same identical rule but with different actions (STOP and WARN), each rule will be evaluated independently resulting in a duplicate finding with the same trigger ID.
Note Please click here to learn more about Anchore’s policies.
8.3.1 - Gate: ancestry
Introduction
The “ancestry” gate gives users the ability to construct policy rules against an image’s ancestry, specifically the base and ancestor images. This gate becomes useful when a user needs to quickly identify if an image SBOM is not part of an organization’s approved set of base and/or ancestor images.
A base image refers to the image that a given image was built from. It serves as a template for developers to create a standardized environment on top of which they can build their custom images (often referred to as a “golden” image).
Ancestor images refer to the chain of images that a given image was built from.
Note To understand the concept of base and ancestor images more, please click here.
Example Use-case
Scenario 1
Goal: Fail a policy evaluation if an image is not part of a list of approved base images.
Example rule set configuration in Anchore Enterprise
Gate: ancestry Trigger: allowed base image digest Required Parameters: base digest = “SHA256:abcdef123456” Recommendation (Optional): The image is not derived from an approved base image. Remediation required. Action: STOP
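A minimal sketch of how this rule might appear in an exported policy JSON (the rule layout is assumed from Anchore’s policy bundle format; the digest is the placeholder value from the scenario):
{
  "gate": "ancestry",
  "trigger": "allowed_base_image_digest",
  "action": "STOP",
  "params": [
    { "name": "base_digest", "value": "sha256:abcdef123456" }
  ]
}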
Reference: ancestry
allowed_base_image_digest: Checks to see if the base image is approved.
  Parameter: base_digest - List of approved base image digests. Example: sha256:123abc
allowed_base_image_tag: Checks to see if the base image is approved.
  Parameter: base_tag - List of approved base image tags. Example: docker.io/nginx:latest
denylist_ancestor_image_digest: Triggers if any of the ancestor images have the provided image digest(s).
  Parameter: ancestor_digest - List of ancestor image digests to check for. Accepts a comma-separated list of digests. Example: sha256:123abc
denylist_ancestor_image_tag: Triggers if any of the ancestor images have the provided image tag(s).
  Parameter: ancestor_tag - List of denied image tags to check the ancestry for. Accepts a comma-separated list of tags. Example: docker.io/nginx:latest
no_ancestors_analyzed: Checks to see if the image has a known ancestor. No parameters.
8.3.2 - Gate: distro
Introduction
The “distro” gate is solely intended to deny an image that is running on a specific distro. This is especially useful if a user wants to create a rule that can quickly discover any image SBOMs containing a specific version of a distro that is denied in their organization.
Example Use-case
Scenario 1
Goal: Create a rule that results in a STOP action for images that are running below Debian version 9.
Example rule set configuration in Anchore Enterprise
Gate: distro Trigger: deny Required Parameters: distro = “debian”, version = “9”, check = “<”
Recommendations (optional): “Image is running on an old version of Debian. Update required.” Action: STOP
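Expressed as a policy JSON rule, this might look roughly as follows (values taken from the scenario above; layout assumed from the exported policy format):
{
  "gate": "distro",
  "trigger": "deny",
  "action": "STOP",
  "params": [
    { "name": "distro", "value": "debian" },
    { "name": "version", "value": "9" },
    { "name": "check", "value": "<" }
  ]
}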
Reference: distro
deny: Triggers if the image distro and version match the criteria.
  Parameter: distro - Name of the distribution to match. Example: debian
  Parameter: version - Version of the distribution to compare against. Example: 9
  Parameter: check - The comparison to use in the evaluation. Example: <
8.3.3 - Gate: dockerfile
Introduction
This article reviews the “dockerfile” gate and its triggers. The dockerfile gate allows users to perform checks on the content of the dockerfile or docker history for an image and make policy actions based on the construction of an image, not just its content. This is particularly useful for enforcing best practices or metadata inclusion (e.g. labels) on images.
Anchore is either given a dockerfile or infers one from the docker image layer history. There are implications to what data is available and what it means depending on these differing sources, so first, we’ll cover the input data for the gate and how it impacts the triggers and parameters used.
The “dockerfile”
The data that this gate operates on can come from two different sources:
The actual dockerfile used to build an image, as provided by the user at the time of running anchorectl image add <img ref> --dockerfile <filename> or the corresponding API call to: POST /images?dockerfile=
The history from layers as encoded in the image itself (see docker history <img> for this output)
All images have data from history available, but data from the actual dockerfile is only available when a user provides it. This also means that any images analyzed by the tag watcher functionality will not have an actual dockerfile.
The FROM line
In the actual dockerfile, the FROM instruction is preserved and available as used to build the image. In the history data, however, the FROM line will always be the very first FROM instruction used to build the image and all of its dependent base images. Thus, for most images, the value in the history will be omitted and Anchore will automatically infer a FROM scratch line, which is logically inserted for this gate if the dockerfile/history does not contain an explicit FROM entry.
For example, using the docker.io/jenkins/jenkins image:
FROM openjdk:8-jdk-stretch
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
ARG http_port=8080
ARG agent_port=50000
ARG JENKINS_HOME=/var/jenkins_home
ENV JENKINS_HOME $JENKINS_HOME
ENV JENKINS_SLAVE_AGENT_PORT ${agent_port}
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN mkdir -p $JENKINS_HOME \
&& chown ${uid}:${gid} $JENKINS_HOME \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME $JENKINS_HOME
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Use tini as subreaper in Docker container to adopt zombie processes
ARG TINI_VERSION=v0.16.1
COPY tini_pub.gpg ${JENKINS_HOME}/tini_pub.gpg
RUN curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture) -o /sbin/tini \
&& curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture).asc -o /sbin/tini.asc \
&& gpg --no-tty --import ${JENKINS_HOME}/tini_pub.gpg \
&& gpg --verify /sbin/tini.asc \
&& rm -rf /sbin/tini.asc /root/.gnupg \
&& chmod +x /sbin/tini
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.121.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=5bb075b81a3929ceada4e960049e37df5f15a1e3cfc9dc24d749858e70b48919
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
# could use ADD but this one does not check Last-Modified header neither does it allow to control checksum
# see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha256sum -c -
ENV JENKINS_UC https://updates.jenkins.io
ENV JENKINS_UC_EXPERIMENTAL=https://updates.jenkins.io/experimental
ENV JENKINS_INCREMENTALS_REPO_MIRROR=https://repo.jenkins-ci.org/incrementals
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
# for main web interface:
EXPOSE ${http_port}
# will be used by attached slave agents:
EXPOSE ${agent_port}
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
USER ${user}
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
COPY tini-shim.sh /bin/tini
ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/jenkins.sh"]
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
COPY install-plugins.sh /usr/local/bin/install-plugins.sh
If the dockerfile is not explicitly provided, Anchore will infer the dockerfile from the image history (note that the order is reversed from the docker history output, so it reads in the same order as the actual dockerfile).
NOTE: Anchore processes the leading /bin/sh commands, so you do not have to include those in any trigger param config if using the docker history output.
The actual_dockerfile_only Parameter
The actual vs. history source impacts the semantics of the dockerfile gate’s triggers. To allow explicit control of the differences, most triggers in this gate include a parameter, actual_dockerfile_only, that ensures the trigger check is only done against the specified source of data. If actual_dockerfile_only = true, then the trigger will evaluate only if an actual dockerfile is available for the image and will skip evaluation if not. If actual_dockerfile_only is false or omitted, then the trigger will run on the actual dockerfile if available, or on the history data if the dockerfile was not provided.
Differences in data between Docker History and actual Dockerfile
With Actual Dockerfile:
FROM line is preserved, so the parent tag of the image is easily available
Instruction checks are all against instructions created during the build for that exact image, not any parent images
When the actual_dockerfile_only parameter is set to true, all instructions from the parent image are ignored in policy processing. This may have some unexpected consequences depending on how your images are structured and layered (e.g. golden base images that establish common patterns of volumes, labels, healthchecks)
COPY/ADD instructions will maintain the actual values used
Multistage-builds in that specific dockerfile will be visible with multiple FROM lines in the output
With Docker History data, when no dockerfile is provided:
FROM line is not accurate, and will nearly always default to ‘FROM scratch’
Instructions are processed from all layers in the image
COPY and ADD instructions are transformed into SHAs rather than the actual file path/name used at build-time
Multi-stage builds are not tracked with multiple FROM lines, only the copy operations between the phases
Trigger: instruction
This trigger evaluates instructions found in the “dockerfile”.
Parameters
actual_dockerfile_only (optional): See above
instruction: The dockerfile instruction to check against. One of:
ADD
ARG
COPY
CMD
ENTRYPOINT
ENV
EXPOSE
FROM
HEALTHCHECK
LABEL
MAINTAINER
ONBUILD
USER
RUN
SHELL
STOPSIGNAL
VOLUME
WORKDIR
check: The comparison/evaluation to perform. One of: =, != , exists, not_exists, like, not_like, in, not_in.
value (optional): A string value to compare against, if applicable.
Examples
Ensure an image has a HEALTHCHECK defined in the image (warn if not found).
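A sketch of such a rule in policy JSON form might look like the following (WARN action with a not_exists check; layout assumed from the exported policy format):
{
  "gate": "dockerfile",
  "trigger": "instruction",
  "action": "WARN",
  "params": [
    { "name": "instruction", "value": "HEALTHCHECK" },
    { "name": "check", "value": "not_exists" }
  ]
}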
Trigger: effective_user
This trigger processes all USER directives in the dockerfile or history to determine which user will be used to run the container by default (assuming no user is set explicitly at runtime). The detected value is then subject to an allowlist or denylist filter depending on the configured parameters. Typically, this is used for denylisting the root user.
Parameters
actual_dockerfile_only (optional): See above
users: A string with a comma delimited list of usernames to check for.
type: The type of check to perform. One of: ‘denylist’ or ‘allowlist’. This determines how the value of the ‘users’ parameter is interpreted.
Trigger: exposed_ports
This trigger processes the set of EXPOSE directives in the dockerfile/history to determine the set of ports that are defined to be exposed (since it can span multiple directives). It performs checks on that set to denylist/allowlist them based on parameter settings.
Parameters
actual_dockerfile_only (optional): See above
ports: String of comma delimited port numbers to be checked.
type: The type of check to perform. One of: ‘denylist’ or ‘allowlist’. This determines how the value of the ‘ports’ parameter is interpreted.
Examples
Allow only ports 80 and 443. Trigger will fire on any port defined to be exposed that is not 80 or 443.
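As a sketch, an allowlist rule for ports 80 and 443 might be written as follows in policy JSON (values and layout are illustrative):
{
  "gate": "dockerfile",
  "trigger": "exposed_ports",
  "action": "STOP",
  "params": [
    { "name": "ports", "value": "80,443" },
    { "name": "type", "value": "allowlist" }
  ]
}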
Trigger: no_dockerfile_provided
This trigger allows checks on the way the image was added, firing if the dockerfile was not explicitly provided at analysis time. This is useful in identifying and qualifying other trigger matches.
Parameters
None
Examples
Raise a warning if no dockerfile was provided at analysis time.
Example Use-cases
Scenario 1
Goal: Create a rule that results in a STOP action if the username “root” is found in an image SBOM’s dockerfile “USER” line.
Example rule set configuration in Anchore Enterprise
Gate: dockerfile Trigger: effective_user Required Parameters: users = “root”, type = “denylist” Recommendations (optional): “The username “root” is found in USER line. Fix required.” Action: STOP
Scenario 2
Goal: Create a rule that results in a WARN action for usernames “nginx” or “jenkins” not found in an image SBOM’s dockerfile “USER” line.
Example rule set configuration in Anchore Enterprise
Reference: dockerfile
instruction: Triggers if any directives in the list are found to match the described condition in the dockerfile.
  Parameter: instruction - The Dockerfile instruction to check. Example: from
  Parameter: check - The type of check to perform. Example: =
  Parameter: value - The value to check the dockerfile instruction against. Example: scratch
  Parameter: actual_dockerfile_only - Only evaluate against a user-provided dockerfile; skip evaluation on inferred/guessed dockerfiles. Default is False. Example: true
effective_user: Checks if the effective user matches the provided user names, either as an allowlist or blocklist depending on the type parameter setting.
  Parameter: users - User names to check against as the effective user (last user entry) in the image’s history. Example: root,docker
  Parameter: type - How to treat the provided user names. Example: denylist
exposed_ports: Evaluates the set of ports exposed. Allows configuring allowlist or blocklist behavior. If type=allowlist, then any ports found exposed that are not in the list will cause the trigger to fire. If type=denylist, then any ports exposed that are in the list will cause the trigger to fire.
  Parameter: ports - List of port numbers. Example: 80,8080,8088
  Parameter: type - Whether to use the port list as an allowlist or denylist. Example: denylist
  Parameter: actual_dockerfile_only - Only evaluate against a user-provided dockerfile; skip evaluation on inferred/guessed dockerfiles. Default is False. Example: true
no_dockerfile_provided: Triggers if anchore analysis was performed without supplying the actual image Dockerfile. No parameters.
8.3.4 - Gate: files
Introduction
The “files” gate performs checks against the files in an analyzed image SBOM and is useful when users need to create policies that trigger against any matched file content, names and/or attributes.
Note The “files” gate differs from the “retrieved_files” gate. The “files” gate searches against the files present in an image SBOM whereas the “retrieved_files” gate utilizes Anchore’s cataloger capability and checks against files that are provided and stored by the user before analysis.
Example Use-cases
Scenario 1
Goal: Create a rule that results in a STOP action for any file name that contains “.pem”, which may include information such as the public certificate or even an entire certificate chain (public key, private key, and root certificates) of an image SBOM.
Example rule set configuration in Anchore Enterprise
Gate: files Trigger: name match Required Parameters: regex = “.*\.pem” Recommendations (optional): “Filename with “.pem” found - Remediation required.” Action: STOP
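In exported policy JSON, this rule might look roughly like the sketch below (the regex is escaped for JSON; layout assumed from the policy bundle format):
{
  "gate": "files",
  "trigger": "name_match",
  "action": "STOP",
  "params": [
    { "name": "regex", "value": ".*\\.pem" }
  ]
}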
Scenario 2
Goal: Create a rule that results in a STOP action for any file that matches against regex string “.*password.*” in an image SBOM.
Note In order to use this gate, the analyzer_config.yaml file for your Anchore deployment must have specific regex strings configured under the content_search section, as the rule will only check against regex strings that appear in this list. Please use the optional parameter “regex name” if you want to specify a single string for your policy rule. If this parameter is not configured, then every regex string stored in the content_search section of the analyzer_config.yaml will be checked against.
Example rule set configuration in Anchore Enterprise
Gate: files Trigger: content regex match Optional Parameters: regex name = “ABC” Recommendations (optional): “Regex string “.*password.*” found in file. Fix required.” Action: STOP
analyzer_config.yaml file
Reference: files
content_regex_match: Triggers for each file where the content search analyzer has found a match using configured regexes in the analyzer_config.yaml “content_search” section. If the parameter is set, the trigger will only fire for files that matched the named regex. Refer to your analyzer_config.yaml for the regex values.
  Parameter: regex_name - Regex string that also appears in the FILECHECK_CONTENTMATCH analyzer parameter in analyzer configuration, to limit the check to. If set, will only fire the trigger when the specific named regex was found in a file. Example: .password.
name_match: Triggers if a file exists in the container that has a filename that matches the provided regex. This does have a performance impact on policy evaluation.
  Parameter: regex - Regex to apply to file names for match. Example: .*.pem
attribute_match: Triggers if a filename exists in the container that has attributes that match those which are provided. This check has a performance impact on policy evaluation.
  Parameter: filename - Filename to check against provided checksum. Example: /etc/passwd
  Parameter: checksum_algorithm - Checksum algorithm. Example: sha256
  Parameter: checksum - Checksum of file.
  Parameter: checksum_match - Checksum operation to perform. Example: equals
  Parameter: mode - File mode of file. Example: 00644
  Parameter: mode_op - File mode operation to perform. Example: equals
  Parameter: skip_missing - If set to true, do not fire this trigger if the file is not present. If set to false, fire this trigger ignoring the other parameter settings. Example: true
suid_or_guid_set: Fires for each file found to have suid or sgid bit set.
  Parameter: ignore dir - When set to true, the gate will not trigger if found on a directory. The default is false, which will include evaluating directories as well as files. Example: true
8.3.5 - Gate: image_source_drift
Introduction
The “image source drift” gate allows users to perform checks against the difference between an image source repo SBOM and the build image SBOM. The difference operates by “contains” relationships where the analyzed image SBOM is the base “target” and the source revisions are the “source” for calculation.
Example Use-cases
Scenario 1
Goal: Create a rule that results in a STOP action for packages that are missing from an image SBOM but were expected to be present based on the image source SBOM.
Example rule set configuration in Anchore Enterprise
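A minimal sketch of this rule in policy JSON form, assuming the package_removed trigger with no package type filter (layout assumed from the exported policy format):
{
  "gate": "image_source_drift",
  "trigger": "package_removed",
  "action": "STOP",
  "params": []
}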
Goal: Create a rule that results in a STOP action for npm packages found in an image SBOM with versions lower than the ones specified in the image source SBOM.
Example rule set configuration in Anchore Enterprise
Reference: image_source_drift
package_downgraded: Checks to see if any packages have a lower version in the built image than specified in the input source SBOMs.
  Parameter: package_types - Types of package to filter by. Example: java,npm
package_removed: Checks to see if any packages are not installed that were expected based on the image’s related input source SBOMs.
  Parameter: package_types - Types of package to filter by. Example: java,npm
no_related_sources: Checks to see if there are any source SBOMs related to the image. Findings indicate that the image does not have a source SBOM to detect drift against. No parameters.
8.3.6 - Gate: licenses
Introduction
The “licenses” gate allows users to perform checks against found licenses in an image SBOM and perform different
policy actions with available triggers.
Note: License names are normalized to SPDX format. Please refer to the SPDX License List for more information.
Example Use-case
Scenario 1
Goal: Create a rule that results in a STOP action for any GNU packages distributed under the General Public License (GPL) version 2 or later.
Example rule set configuration in Anchore Enterprise
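One way this could be expressed in policy JSON is sketched below, using the denylist_exact_match trigger with example license names (values are illustrative; layout assumed):
{
  "gate": "licenses",
  "trigger": "denylist_exact_match",
  "action": "STOP",
  "params": [
    { "name": "licenses", "value": "GPLv2+,GPL-3+" },
    { "name": "package_type", "value": "all" }
  ]
}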
Reference: licenses
denylist_exact_match: Triggers if the evaluated image has a package installed with software distributed under the specified (exact match) license(s).
  Parameter: licenses - List of license names to denylist exactly. Example: GPLv2+,GPL-3+,BSD-2-clause
  Parameter: package_type - Only trigger for specific package type. Example: all
denylist_partial_match: Triggers if the evaluated image has a package installed with software distributed under the specified (substring match) license(s).
  Parameter: licenses - List of strings to do substring match for denylist. Example: LGPL,BSD
  Parameter: package_type - Only trigger for specific package type. Example: all
8.3.7 - Gate: malware
Introduction
The “Malware” Policy Gate allows users to apply compliance rules when malware has been detected within an image.
Anchore Enterprise uses ClamAV during image analysis to detect malware. ClamAV is an open-source antivirus toolkit and can be used to detect various kinds of malicious threats on a system. For additional details, please see Malware Scanning
Please Note: Files in an image which are greater than 2GB will be skipped due to a limitation in ClamAV. Any skipped file will be identified with a Malware Signature as ANCHORE.FILE_SKIPPED.MAX_FILE_SIZE_EXCEEDED.
When performing Malware Scanning on these larger images, please expect an increase in your analysis time.
Reference: malware
scans: Triggers if the malware scanner has found any matches in the image. No parameters.
scan_not_run: Triggers if a file was skipped because it exceeded the max file size.
  Parameter: Fire on Skipped Files
Example Use-case
Scenario 1
Goal: Create a rule that results in a STOP action if malware is detected on an image SBOM.
Example rule set configuration in Anchore Enterprise
Gate: malware Trigger: scans Action: STOP
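In policy JSON this rule is about as simple as a rule gets; a sketch (layout assumed from the exported policy format):
{
  "gate": "malware",
  "trigger": "scans",
  "action": "STOP",
  "params": []
}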
8.3.8 - Gate: metadata
Introduction
The “metadata” gate provides users a variety of attributes to create policy rules that check against image SBOM metadata. Currently, the following attributes are provided in the “metadata” gate for policy rule creation:
size
architecture
os type
distro
distro version
like distro
layer count
Example Use-case
Scenario 1
Goal: Create a rule that results in a STOP action for an image SBOM containing alpine OS.
Example rule set configuration in Anchore Enterprise
8.3.9 - Gate: packages
Introduction
The “packages” gate allows users to perform checks against the packages installed in an image SBOM.
Example Use-case
Scenario 1
Goal: Create a rule that results in a STOP action if libssl-dev packages are found in an image SBOM but running on a version other than 1.1.1-1ubuntu2.1~18.04.23.
Example rule set configuration in Anchore Enterprise
Gate: packages Trigger: metadata Optional Parameters: name = “libssl-dev”, name comparison = “=”, version = “1.1.1-1ubuntu2.1~18.04.23”, version comparison = “!=” Action: STOP
Reference: packages
required_package: Triggers if the specified package and optionally a specific version is not found in the image.
  Parameter: name - Name of package that must be found installed in image. Example: libssl
  Parameter: version - Optional version of package for exact version match. Example: 1.10.3rc3
  Parameter: version_match_type - The type of comparison to use for version if a version is provided. Example: exact
verify: Check package integrity against the package db in the image. Triggers for changes or removal of content in all or the selected “dirs” parameter if provided, and can filter the type of check with the “check_only” parameter.
  Parameter: only_packages - List of package names to limit verification. Example: libssl,openssl
  Parameter: only_directories - List of directories to limit checks so as to avoid checks on all dirs. Example: /usr,/var/lib
  Parameter: check - Check to perform instead of all. Example: changed
denylist: Triggers if the evaluated image has a package installed that matches the named package, optionally with a specific version as well.
  Parameter: name - Package name to denylist. Example: openssh-server
  Parameter: version - Specific version of package to denylist. Example: 1.0.1
  Parameter: version comparison - The type of comparison to use for version if a version is provided. Example: >
metadata: Triggers on a package type, name, or version comparison.
  Parameter: type - The type of package. Example: rpm
  Parameter: name - The name of the package. Wildcards are supported. Example: *ssl
  Parameter: version - The version of the package. Wildcards are supported. Example: *fips
8.3.10 - Gate: passwd_file
Introduction
The “passwd_file” gate allows users to perform checks against /etc/passwd files with the retrieve_files cataloger. For more information about cataloger scans, please click here.
Example Use-case
Scenario 1
Goal: Create a rule that results in a STOP action if the username “foobar” is found in the image’s /etc/passwd file.
Note In order to use this gate, your Anchore deployment must be configured (for example, via its values.yaml) to retrieve and store the /etc/passwd file at analysis time.
Example rule set configuration in Anchore Enterprise
Reference: passwd_file
content_not_available: Triggers if the /etc/passwd file is not present/stored in the evaluated image. No parameters.
denylist_usernames: Triggers if a specified username is found in the /etc/passwd file.
  Parameter: user_names - List of usernames that will cause the trigger to fire if found in /etc/passwd. Example: daemon,ftp
denylist_userids: Triggers if a specified user id is found in the /etc/passwd file.
  Parameter: user_ids - List of user ids (numeric) that will cause the trigger to fire if found in /etc/passwd. Example: 0,1
denylist_groupids: Triggers if a specified group id is found in the /etc/passwd file.
  Parameter: group_ids - List of group ids (numeric) that will cause the trigger to fire if found in /etc/passwd. Example: 999,20
denylist_shells: Triggers if a specified login shell for any user is found in the /etc/passwd file.
  Parameter: shells - List of shell commands to denylist. Example: /bin/bash,/bin/zsh
denylist_full_entry: Triggers if the entire specified passwd entry is found in the /etc/passwd file.
  Parameter: entry - Full entry to match in /etc/passwd. Example: ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
8.3.11 - Gate: retrieved_files
Introduction
The “retrieved_files” gate allows users to check against the content and/or presence of files retrieved at the time of analysis for an image SBOM. The intent of this gate is to allow users to utilize the retrieve_files cataloger in order to create policy rules from a configured file list. However, the usage of this gate depends on running the retrieve_files cataloger, which will require more resources and time to perform analysis on the image SBOM. For more information about cataloger scans, please click here.
Note The “retrieved_files” gate differs from the “files” gate. The “retrieved_files” gate utilizes Anchore’s cataloger capability and checks against files that are provided and stored by the user, while the “files” gate checks against the files present in the analyzed image SBOM (i.e., file content, file names, filesystem attributes).
Example Use-case
Scenario 1
Goal: Create a rule that results in a STOP action if the regex “SSIEnabled” is not found in the content of the file in the path /etc/httpd.conf.
Example rule set configuration in Anchore Enterprise
Reference: retrieved_files
content_not_available: Triggers if the specified file is not present/stored in the evaluated image.
  Parameter: path - The path of the file to verify has been retrieved during analysis. Example: /etc/httpd.conf
content_regex: Evaluation of regex on retrieved file content.
  Parameter: path - The path of the file to verify has been retrieved during analysis. Example: /etc/httpd.conf
  Parameter: check - The type of check to perform with the regex. Example: match
  Parameter: regex - The regex to evaluate against the content of the file. Example: .SSlEnabled.
8.3.12 - Gate: secret_scans
Introduction
The “secret_scans” gate allows users to perform checks against secrets and content found in an image SBOM using configured regexes found in the “secret_search” section of the analyzer_config.yaml file.
In order to use this gate effectively, ensure that regexes are properly configured in the analyzer_config.yaml file in the Anchore deployment. By default, the following names are made available in the “secret_search” section: AWS_ACCESS_KEY, AWS_SECRET_KEY, PRIV_KEY, DOCKER_AUTH, API_KEY.
Example Use-case
Scenario 1
Goal: Create a rule that results in a STOP action for disclosed AWS access keys found in files matching “/etc/.*” in an image SBOM.
Note In order to use this gate, the analyzer_config.yaml file for your Anchore deployment must have regexps named and configured. If none of the optional parameters are used for the policy rule, then by default every regexp_match configured in the analyzer_config.yaml file will be checked against.
Example rule set configuration in Anchore Enterprise
Gate: secret scans Trigger: content regex checks Optional Parameters: content regex name = “AWS_ACCESS_KEY”, filename regex = “/etc/.*”, match type = “found” Action: STOP
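A sketch of this rule in policy JSON form, using the parameter names from the reference below (layout assumed from the exported policy format):
{
  "gate": "secret_scans",
  "trigger": "content_regex_checks",
  "action": "STOP",
  "params": [
    { "name": "content_regex_name", "value": "AWS_ACCESS_KEY" },
    { "name": "filename_regex", "value": "/etc/.*" },
    { "name": "match_type", "value": "found" }
  ]
}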
Reference: secret_scans
content_regex_checks: Triggers if the secret content search analyzer has found any matches with the configured and named regexes. Checks can be configured to trigger if a match is found or is not found (selected using the match_type parameter). Matches are filtered by the content_regex_name and filename_regex if they are set. The content_regex_name should be a value from the “secret_search” section of the analyzer_config.yaml.
  Parameter: content_regex_name - Name of content regexps configured in the analyzer that match if found in the image, instead of matching all. Names available by default are: [‘AWS_ACCESS_KEY’, ‘AWS_SECRET_KEY’, ‘PRIV_KEY’, ‘DOCKER_AUTH’, ‘API_KEY’]. Example: AWS_ACCESS_KEY
  Parameter: filename_regex - Regexp to filter the content matched files by. Example: /etc/.*
  Parameter: match_type - Set to define the type of match; trigger if a match is found (default) or not found. Example: found
8.3.13 - Gate: tag_drift
Introduction
If evaluating by image tag, the “tag_drift” gate allows users to perform checks against packages that have been changed (added, removed, modified) on an image SBOM from the tag’s previous image SBOM.
Example Use-case
Scenario 1
Goal: Create a rule that results in a STOP action for any packages that have been modified in an evaluated image tag’s SBOM from the tag’s previous evaluation results.
Example rule set configuration in Anchore Enterprise
Gate: tag drift Trigger: packages modified Action: STOP
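Sketched as policy JSON (no package_type filter, so all package types are evaluated; layout assumed):
{
  "gate": "tag_drift",
  "trigger": "packages_modified",
  "action": "STOP",
  "params": []
}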
Reference: tag_drift
Gate: Tag Drift
Compares the SBOM from the evaluated image’s tag and the tag’s previous image, if found. Provides triggers to detect packages added, removed or modified.
packages_added: Checks to see if any packages have been added.
  Parameter: package_type - Package type to filter for only specific types. If omitted, then all types are evaluated. Example: apk
packages_removed: Checks to see if any packages have been removed.
  Parameter: package_type - Package type to filter for only specific types. If omitted, then all types are evaluated. Example: apk
packages_modified: Checks to see if any packages have been modified.
  Parameter: package_type - Package type to filter for only specific types. If omitted, then all types are evaluated. Example: apk
8.3.14 - Gate: vulnerabilities
Introduction
The “vulnerabilities” gate provides users the ability to use either a single or combination of triggers and attributes that match against vulnerability metadata to create policies for the vulnerabilities discovered in an image SBOM.
Note Currently, only the following Triggers are available for Source Repository Rule Sets: - Denylist - Package - Stale Feed Data
Example Use-cases
Scenario 1
Goal: Create a rule that results in a STOP action for every critical vulnerability.
Example rule set configuration in Anchore Enterprise
Gate: vulnerabilities Trigger: package Required Parameters: package type = “all” Optional Parameters: severity comparison = “=”, severity = “critical” Recommendations (optional): “Remediation is required for critical vulnerabilities.” Action: STOP
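A sketch of this rule in exported policy JSON (values from the scenario above; layout assumed from the policy bundle format):
{
  "gate": "vulnerabilities",
  "trigger": "package",
  "action": "STOP",
  "params": [
    { "name": "package_type", "value": "all" },
    { "name": "severity_comparison", "value": "=" },
    { "name": "severity", "value": "critical" }
  ]
}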
Scenario 2
Goal: Create a rule that results in a STOP action for every vulnerability that is a part of CISA’s KEV list.
Example rule set configuration in Anchore Enterprise
Gate: vulnerabilities Trigger: kev list Recommendations (optional): “This vulnerability is part of CISA’s Known Exploited Vulnerability (KEV) catalogue. Remediation is required.” Action: STOP
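Because the kev list trigger takes no parameters, the corresponding policy JSON sketch is minimal (layout assumed):
{
  "gate": "vulnerabilities",
  "trigger": "kev_list",
  "action": "STOP",
  "params": []
}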
Scenario 3
Goal: Create a rule that results in a WARN action for every critical vulnerability for which the vendor has indicated that a fix will not be provided.
Example rule set configuration in Anchore Enterprise
Gate: vulnerabilities Trigger: package Required Parameters: package type = “all” Optional Parameters: severity comparison = “=”, severity = “critical”, vendor only = “false” Recommendations (optional): “Even though this is a critical vulnerability, the vendor indicates that a fix will not be addressed.” Action: WARN
Reference: vulnerabilities
package: Triggers if a found vulnerability in an image meets the comparison criteria.
  Parameter: package_type - Only trigger for specific package type. Example: all
  Parameter: severity_comparison - The type of comparison to perform for severity evaluation. Example: >
  Parameter: severity - Severity to compare against. Example: high
  Parameter: cvss_v3_base_score_comparison - The type of comparison to perform for CVSS v3 base score evaluation. Example: >
  Parameter: cvss_v3_base_score - CVSS v3 base score to compare against. Example: None
  Parameter: cvss_v3_exploitability_score_comparison - The type of comparison to perform for CVSS v3 exploitability sub score evaluation. Example: >
  Parameter: cvss_v3_exploitability_score - CVSS v3 exploitability sub score to compare against. Example: None
  Parameter: cvss_v3_impact_score_comparison - The type of comparison to perform for CVSS v3 impact sub score evaluation. Example: >
  Parameter: cvss_v3_impact_score - CVSS v3 impact sub score to compare against. Example: None
  Parameter: fix_available - If present, the fix availability for the vulnerability record must match the value of this parameter. Example: true
  Parameter: vendor_only - If true, an available fix for this CVE must not be explicitly marked as “won’t be addressed” by the vendor. Example: true
  Parameter: max_days_since_creation - A grace period, in days, for a vulnerability match to be present after which the vulnerability is a policy violation. Uses the date the match was first found for the given image. Example: 7
  Parameter: max_days_since_fix - If provided (only evaluated when the fix_available option is also set to true), the fix first observed time must be older than the days provided to trigger. Please note that days since fix begins when your Anchore deployment first sees that a fix is available. Example: 30
  Parameter: vendor_cvss_v3_base_score_comparison - The type of comparison to perform for vendor specified CVSS v3 base score evaluation. Example: >
  Parameter: vendor_cvss_v3_base_score - Vendor CVSS v3 base score to compare against. Example: None
  Parameter: vendor_cvss_v3_exploitability_score_comparison - The type of comparison to perform for vendor specified CVSS v3 exploitability sub score evaluation. Example: >
  Parameter: vendor_cvss_v3_exploitability_score - Vendor CVSS v3 exploitability sub score to compare against. Example: None
  Parameter: vendor_cvss_v3_impact_score_comparison - The type of comparison to perform for vendor specified CVSS v3 impact sub score evaluation. Example: >
  Parameter: vendor_cvss_v3_impact_score - Vendor CVSS v3 impact sub score to compare against. Example: None
  Parameter: package_path_exclude - The regex to evaluate against the package path to exclude vulnerabilities. Example: .test.jar
  Parameter: inherited_from_base - If true, only show vulns inherited from the base; if false, only show vulns not inherited from the base. Don’t specify to include vulns from both the base image and the current image. See Base Images for more details. Example: True
  Parameter: epss_score - The EPSS score to compare against. Example: 0.25
  Parameter: epss_score_comparison - The type of comparison to perform for EPSS score evaluation. Example: >
  Parameter: epss_percentile - The EPSS percentile to compare against. Example: 87
  Parameter: epss_percentile_comparison - The type of comparison to perform for EPSS percentile evaluation. Example: >
denylist: Triggers if any of a list of specified vulnerabilities has been detected in the image.
  Parameter: vulnerability_ids - List of vulnerability IDs; will cause the trigger to fire if any are detected. Example: CVE-2019-1234
  Parameter: vendor_only - If set to True, discard matches against this vulnerability if the vendor has marked it as “will not fix” in the vulnerability record. Example: True
stale_feed_data: Triggers if the CVE data is older than the window specified by the parameter MAXAGE (unit is number of days).
  Parameter: max_days_since_sync - Fire the trigger if the last sync was more than this number of days ago. Example: 10
vulnerability_data_unavailable: Triggers if vulnerability data is unavailable for the image’s distro packages, such as rpms or dpkg. Non-OS packages like npms and java are not considered in this evaluation. No parameters.
kev_list_data_missing: Triggers if the KEV list data has not been synced. No parameters.
kev_list: Triggers if any vulnerabilities are on the KEV list. No parameters.
8.3.15 - Gate: always
Introduction
The “always” gate is intended for testing purposes and is not advised for actual policy usage. The “always” gate has only one trigger; if it is part of a rule set, the policy evaluation will automatically result in the configured action (in most cases, “STOP”). This is especially useful when users want to test mappings and allowlists, because they can use this rule in combination with other rules in a single rule set without having to manually create dedicated policies for running tests.
The gate will always trigger with the configured action if it is included inside an active policy.
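For testing, the rule can be sketched in policy JSON as follows (layout assumed from the exported policy format):
{
  "gate": "always",
  "trigger": "always",
  "action": "STOP",
  "params": []
}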
Reference: always
always: Fires if present in a policy being evaluated. Useful for things like deny-listing images or testing mappings and allowlists by using this trigger in combination with policy mapping rules. No parameters.
8.4 - Policy Mappings
Introduction
The Mapping feature of the Policy Editor creates rules that define which
policies and allowlists should be used to perform the policy evaluation of a
source repository or container image based on the registry, repository name,
and tag of the image.
The policy editor lets you set up different policies that will be used on
different images based on the use case. For example the policy applied to a
web-facing service may have different security and operational best
practices rules than a database backend service.
A mapping has:
Registry - The registry url to match, including wildcards (e.g. ‘docker.io’, ‘quay.io’, ‘gcr.io’, ‘*’)
Repository - The repository name to match, including wildcards (e.g. ’library/nginx’, ‘mydockerhubusername/myrepositoryname’, ’library/*’, ‘*’)
Image - The way to select an image that matches the registry and repository filters
type: how to reference the image and the expected format of the ‘value’ property
“tag” - just the tag name itself (the part after the ‘:’ in a docker pull string: e.g. nginx:latest -> ’latest’ is the tag name)
“id” - the image id
“digest” - the image digest (e.g. sha256:abc123)
value: the value to match against, including wildcards
For example:
Registry: registry.example.com - Apply mapping to the registry.example.com registry.
Repository: anchore/web* - Map any repository starting with web in the anchore namespace.
Tag: * - Map any tag.
In this example, an image named registry.example.com/anchore/webapi:latest
would match this mapping, and so the policy and allowlist configured for this
mapping would be applied.
Unlike other parts of the policy, Mappings are evaluated in order and will halt on the first matching rule. This is important to understand when combined with wildcard matches since it enables sophisticated matching behavior.
Note: The allowed images and denied images lists take precedence over
the mapping. See Allowed / Denied Images for
details.
It is recommended that a final catch-all mapping is applied to ensure that all
images are mapped to a policy. This catch-all mapping should specify wildcards
in the registry, repository, and tag fields.
8.4.1 - Container Image Mapping
Introduction
The container image policy mapping editor creates rules that define which
policies and allowlists should be used to perform the policy evaluation of an
image based on the registry, repository name, and tag of the image.
Create a New Container Image Mapping
From the Policies screen, click Mappings.
Under Container Images, click on the “Let’s add one!” button.
From the Add New Container Image Mapping dialog, add a name for the mapping, the policy to which the mapping will apply (added automatically), a registry, a repository, and a tag. You can optionally add an allowlist for the mapping.
Note: Once you have created your first mapping, any mapping that is created afterwards will contain an additional optional field called Position. Image evaluation is performed sequentially from top to bottom. The system will stop at the first match, so the order or position of the mapping is important.
| Field | Description |
|---|---|
| Name | A unique name to describe the mapping. For example: “Mapping for webapps”. |
| Position | Set the order for the new mapping. |
| Rule Sets | Rule sets in the policy to be used for evaluation. A drop-down will be displayed allowing selection of a single policy. |
| Allowlist(s) | Optional: The allowlist(s) to be applied to the image evaluation. Multiple allowlists may be applied to the same image. |
| Registry | The name of the registry to match. Note the name should exactly match the name used to submit the image or repo for analysis. For example: foo.example.com:5000 is different from foo.example.com. Wildcards are supported. A single * would specify any registry. |
| Repository | The name of the repository, optionally including namespace. For example: webapp/foo. Wildcards are supported. A single * would specify any repository. Partial names with wildcards are supported. For example: web*/*. |
| Tag | Tags mapped by this rule. For example: latest. Wildcards are supported. A single * would match any tag. Partial names with wildcards are supported. For example: 2018*. |
Click OK to create the new mapping.
It is recommended that a final catch-all mapping is applied to ensure that all container images are mapped to a policy. This catch-all mapping should specify wildcards in the registry, repository, and tag fields.
Using the policy editor, you can set up different policies that will be used on different images based on use case. For example, the policy applied to a web-facing service may have different security and operational best practices rules than a database backend service.
Mappings are set up based on the registry, repository, and tag of an image. Each field supports wildcards. For example:
| Field | Example | Description |
|---|---|---|
| Registry | registry.example.com | Apply mapping to the registry.example.com |
| Repository | anchore/web* | Map any repository starting with web in the anchore namespace |
| Tag | * | Map any tag |
In this example, an image named registry.example.com/anchore/webapi:latest would match this mapping, so the policy and allowlist configured for this mapping would be applied.
The mappings are applied in order, from top to bottom and the system will stop at the first match.
Note: The allowed images and denied images lists take precedence over the mapping. See Allowed / Denied Images for details.
8.4.2 - Source Repository Mapping
The source repository policy mapping editor creates rules that define which policies and allowlists should be used to perform the policy evaluation of a source repository based on the host and repository name.
Organizations can set up multiple policies that will be used on different source repositories based on use case. For example, the policy applied to a web-facing service may have different security and operational best practices rules than a database backend service.
Mappings are set up based on the Host and Repository of a source repository. Each field supports wildcards.
Create a Source Repository Mapping
From the Policies screen, click Mappings.
Under Source Repositories, click on the “Let’s add one!” button.
From the Add New Source Repository Mapping dialog, add a name for the
mapping, choose the policy to which the mapping will apply, a host (such as github.com), and a repository. You can optionally add an allowlist for the mapping.
Note: Once you have created your first mapping, any mapping that is created afterwards will contain an additional optional field called Position. Evaluation is performed sequentially from top to bottom. The system will stop at the first match, so the order or position of the mapping is important.
| Field | Description |
|---|---|
| Name | A unique name to describe the mapping. |
| Position | Optional: Set the order for the new mapping. |
| Policies | Name of the policy to use for evaluation. A drop-down will be displayed allowing selection of a single policy. |
| Allowlist(s) | Optional: The allowlist(s) to be applied to the source repository evaluation. Multiple allowlists may be applied to the same source repository. |
| Host | The name of the source host to match. For example: github.com. |
| Repository | The name of the source repository, optionally including namespace. For example: webapp/foo. Wildcards are supported. A single * would specify any repository. Partial names with wildcards are supported. For example: web*/*. |
Click OK to create the new mapping.
8.4.3 - Policy Mappings Example
Mappings in the policy are a set of rules, evaluated in order, that match an image by registry, repository, and one of image ID, digest, or tag, and that specify the corresponding sets of policies and allowlists to apply to any image matching the rule’s criteria.
Policies can contain one or more mapping rules that are used to determine which rule sets and allowlists apply to a given image. Rules match images on the registry and repository, and finally on one of the image ID, digest, or tag.
Examples
Example 1: all images match a single catch-all rule.
Example 2: all “official” images from DockerHub are evaluated against officialspolicy and officialsallowlist (made-up names for this example), while all others from DockerHub will be evaluated against defaultpolicy and defaultallowlist, and private GCR images will be evaluated against gcrpolicy and gcrallowlist.
Example 3: all images from an unknown registry will be evaluated against defaultpolicy and defaultallowlist, while an internal registry’s images will be evaluated against a different set (internalpolicy and internalallowlist), as sketched below.
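The sketch below illustrates what the mapping rules for Example 3 might look like. The registry value registry.internal.example.com is a made-up placeholder, and the field names (policy_ids, whitelist_ids, and the image type/value selector) are illustrative assumptions rather than a definitive schema; because mappings halt on the first match, the internal registry rule is listed before the catch-all rule:
[
  {
    "name": "internal-registry",
    "registry": "registry.internal.example.com",
    "repository": "*",
    "image": { "type": "tag", "value": "*" },
    "policy_ids": [ "internalpolicy" ],
    "whitelist_ids": [ "internalallowlist" ]
  },
  {
    "name": "default-catch-all",
    "registry": "*",
    "repository": "*",
    "image": { "type": "tag", "value": "*" },
    "policy_ids": [ "defaultpolicy" ],
    "whitelist_ids": [ "defaultallowlist" ]
  }
]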
The result of the evaluation of the mapping section of a policy is the list of rule sets and allowlists that will be used for actually evaluating the image. Because multiple rule sets and allowlists can be specified in each mapping rule, you can use granular rule sets and allowlists and then combine them in the mapping rules.
Examples of schemes for splitting up policies include:
Different policies for different types of checks, such that each policy only uses one or two gates (e.g. vulnerabilities, packages, dockerfile)
Different policies for web servers, database servers, logging infrastructure, etc.
Different policies for different parts of the stack: OS packages vs. application packages
8.5 - Testing Policies
Introduction
The Evaluation Preview feature allows you to perform a test evaluation on an image to verify the mapping, policies and allowlists used to evaluate an image.
To test an image, enter the name of the image, optionally including the registry if the image is not stored on docker.io. In the example below, an evaluation was requested for library/debian:latest; because no registry was specified, the default docker.io registry was used.
Here we can see that the image was evaluated against the policy named “anchore_security_only” and failed, resulting in a STOP action.
Clicking the “View Policy Test Details” will show a more detailed report.
The image was evaluated using the matched mapping, and the evaluation failed because the image was found in a denylist.
The next line explains that the image had been denylisted by the Deny CentOS denylist rule; however, if the image had not been denylisted, it would only have produced a WARN instead of a failure.
The subsequent table lists the policy checks that resulted in any Warn or Stop (failure) checks.
The policy checks are performed on images already analyzed and recorded in Anchore Enterprise. If an image has been added to the system but has not yet completed analysis, then the system will display the following error:
If the evaluation test is re-run after a few minutes, the image will likely have completed analysis and a policy evaluation result will be returned.
If the image specified has not been analyzed by the system and has not been submitted for analysis, then the following error message will be displayed.
Policies with AnchoreCTL
The anchorectl image check command can be used to evaluate a given image for policy compliance via the CLI rather than the UI. This allows images to be tested against policies in your pipeline and actions to be taken depending on the result of the evaluation.
The image to be evaluated can be in the following format:
Image Digest
Image ID
registry/repo:tag
Below is an example of how you would use the command with an image in the registry/repo:tag format.
By default, only the summary of the evaluation is shown. Passing the --detail parameter will show the policy checks that raised warnings or errors.
# anchorectl image check docker.io/debian:latest --detail
✔ Evaluated against policy [failed] docker.io/debian:latest
Tag: docker.io/debian:latest
Digest: sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Last Evaluation: 2023-10-25T20:35:05Z
Evaluation: fail
Final Action: stop
Reason: policy_evaluation
Policy Evaluation Details:
┌─────────────────┬─────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐
│ GATE │ TRIGGER │ DESCRIPTION │ STATUS │
├─────────────────┼─────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤
│ dockerfile │ instruction │ Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check │ warn │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (dpkg) - libgnutls30 (CVE-2011-3389 - https://security-tracker.debian.org/tracker/CVE-2011-3389) │ warn │
│ vulnerabilities │ package │ CRITICAL Vulnerability found in os package type (dpkg) - zlib1g (CVE-2022-37434 - https://security-tracker.debian.org/tracker/CVE-2022-37434) │ stop │
└─────────────────┴─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘
In this example we specified registry/repo:tag, which could be ambiguous. At the time of writing the image digest for library/debian:latest was sha256:0fc...; however, previously different images may have been tagged as library/debian:latest. The --history parameter can be passed to show historic evaluations based on previous images or previous policies.
Anchore supports allowlisting and denylisting images by their name, ID, or digest. A denylist or allowlist takes precedence over any policy checks. For example, if an image is explicitly denylisted, then even if all the individual policy checks pass, the image will still fail evaluation. If you are unsure whether the image is hitting an allowlist or denylist, this information can be seen in the reason field when you pass the --detail parameter.
# anchorectl image check docker.io/debian:latest --detail
✔ Evaluated against policy [failed] docker.io/debian:latest
Tag: docker.io/debian:latest
Digest: sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Last Evaluation: 2023-10-25T20:39:36Z
Evaluation: fail
Final Action: stop
Reason: denylisted
Policy Evaluation Details:
┌─────────────────┬─────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐
│ GATE │ TRIGGER │ DESCRIPTION │ STATUS │
├─────────────────┼─────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤
│ dockerfile │ instruction │ Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check │ warn │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (dpkg) - libgnutls30 (CVE-2011-3389 - https://security-tracker.debian.org/tracker/CVE-2011-3389) │ warn │
└─────────────────┴─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘
In this example even though the image only had one policy check that raised a warning, the reason value shows that the image fails policy evaluation since it is present on a denylist.
Evaluating status based on Digest or ID
Evaluating an image specified by name as shown above is not recommended since an image name is ambiguous. For example the tag docker.io/library/centos:latest refers to whatever image has the tag library/centos:latest at the time of evaluation. At any point in time another image may be tagged as library/centos:latest.
It is recommended that images are referenced by their digest. For example at the time of writing the digest of the ‘current’ library/centos:latest image is sha256:191c883e479a7da2362b2d54c0840b2e8981e5ab62e11ab925abf8808d3d5d44
If the image to be evaluated is specified by Image ID or Image Digest, then the --tag parameter must be added. Policies are mapped to images based on registry/repo:tag, so since an Image ID may map to multiple different names, we must specify the name to use in the evaluation.
For example - referencing by Image Digest:
# anchorectl image check docker.io/debian@sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc --detail --tag docker.io/debian:latest
✔ Evaluated against policy [failed] docker.io/debian@sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Tag: docker.io/debian:latest
Digest: sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Last Evaluation: 2023-10-25T20:44:24Z
Evaluation: fail
Final Action: stop
Reason: denylisted
Policy Evaluation Details:
┌─────────────────┬─────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐
│ GATE │ TRIGGER │ DESCRIPTION │ STATUS │
├─────────────────┼─────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤
│ dockerfile │ instruction │ Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check │ warn │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (dpkg) - libgnutls30 (CVE-2011-3389 - https://security-tracker.debian.org/tracker/CVE-2011-3389) │ warn │
│ vulnerabilities │ package │ CRITICAL Vulnerability found in os package type (dpkg) - zlib1g (CVE-2022-37434 - https://security-tracker.debian.org/tracker/CVE-2022-37434) │ stop │
└─────────────────┴─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘
For example - referencing by image ID:
# anchorectl image check dd8bae8d259fed93eb54b3bca0adeb647fc07f6ef16745c8ed4144ada4d51a95 --detail --tag docker.io/debian:latest
✔ Evaluated against policy [failed] dd8bae8d259fed93eb54b3bca0adeb647fc07f6ef16745c8ed4144ada4d51a95
Tag: docker.io/debian:latest
Digest: sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Last Evaluation: 2023-10-25T20:45:20Z
Evaluation: fail
Final Action: stop
Reason: denylisted
Policy Evaluation Details:
┌─────────────────┬─────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐
│ GATE │ TRIGGER │ DESCRIPTION │ STATUS │
├─────────────────┼─────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤
│ dockerfile │ instruction │ Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check │ warn │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (dpkg) - libgnutls30 (CVE-2011-3389 - https://security-tracker.debian.org/tracker/CVE-2011-3389) │ warn │
│ vulnerabilities │ package │ CRITICAL Vulnerability found in os package type (dpkg) - zlib1g (CVE-2022-37434 - https://security-tracker.debian.org/tracker/CVE-2022-37434) │ stop │
└─────────────────┴─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘
If you want to specify a policy to use for the evaluation other than the policy that is currently set as the default policy, you can use the parameter -p or --policy and then specify the policy ID.
In this example, you will see I specified a new policy at the end of the command using --policy. In this policy, I had added the image being evaluated to an allowlist, and you can see that the Evaluation, Final Action, and Reason have all changed to show the outcome of testing the image against this new policy.
The final feature to demonstrate here, which is valuable for use in your pipeline, is the -f or --fail-based-on-results parameter. When your image successfully passes evaluation, you will not see anything different when using this flag. However, when the image fails the evaluation test, it causes the command itself to fail as if an error had been hit. This sets the command return code to 1, which can be used in your pipeline to differentiate between an image that passes and an image that fails, and allows you to decide on the next action depending on this result.
$ anchorectl image check sha256:a1801b843b1bfaf77c501e7a6d3f709401a1e0c83863037fa3aab063a7fdb9dc --detail --tag docker.io/centos:latest --fail-based-on-results
✔ Evaluated against policy [failed] sha256:a1801b843b1bfaf77c501e7a6d3f709401a1e0c83863037fa3aab063a7fdb9dc
Tag: docker.io/centos:latest
Digest: sha256:a1801b843b1bfaf77c501e7a6d3f709401a1e0c83863037fa3aab063a7fdb9dc
Policy ID: anchore_secure_default
Last Evaluation: 2024-12-23T12:20:56Z
Evaluation: fail
Final Action: stop
Reason: denylisted
Policy Evaluation Details:
┌─────────────────┬─────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ GATE │ TRIGGER │ DESCRIPTION
│ ACTION │ RECOMMENDATION
│
├─────────────────┼─────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ vulnerabilities │ stale_feed_data │ The vulnerability feed for this image distro is older than MAXAGE (2) days
│ warn │ Please check the feed service logs. It appears the data has no been updated in the last 2 days which could suggest a problem. Details on Anchore Enterprise's feed service can be found in the documentation │
│ │ │
│ │ https://docs.anchore.com/current/docs/configuration/feeds/
│
│ │ │
│ │ If you are unable to resolve this issue via the documentation please contact Anchore support.
│
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (rpm) - vim-minimal-2:8.0.1763-15.el8 (fixed in: 2:8.0.1763-16.el8_5.4)(CVE-2021-4193 - https://access.redhat.com/security/cve/CVE-2021-4193) │ warn │ Packages with low, medium, and high vulnerabilities present can be upgraded to resolve these findings. If upgrading is not possible the finding should be added to an allowlist. │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (rpm) - libdnf-0.55.0-7.el8 (fixed in: 0:0.63.0-3.el8)(CVE-2021-3445 - https://access.redhat.com/security/cve/CVE-2021-3445) │ warn │ Packages with low, medium, and high vulnerabilities present can be upgraded to resolve these findings. If upgrading is not possible the finding should be added to an allowlist. │
...
error: 1 error occurred:
* failed policies: anchore_secure_default
In the example above, some of the results in the table detailing the policy checks the image failed on have been omitted for ease of reading, but the evaluation status shows that this image is once again failing evaluation. This time, at the bottom of the output, there is a message: error: 1 error occurred: * failed policies: anchore_secure_default. This is the flag recognizing that the image has failed its evaluation and forcing the command to return an error (setting the return code to 1). It also reports the name of the policy the image failed against, which in this example is anchore_secure_default.
8.6 - Allowed / Denied Images
Introduction
You can add or edit allowed or denied images for your policy rules.
The Allowed / Denied Images tab is split into the following two sub tabs:
Allowed Images: A list of images which will always pass policy evaluation irrespective of any policies that are mapped to them.
Denied Images: A list of images which will always fail policy evaluation irrespective of any policies that are mapped to them.
Add an Allowed or Denied Image to a Policy
If you do not have any allowed or denied images in your policy, click Let’s add one! to add them.
The workflow for adding Allowed or Denied Images is identical.
Images can be referenced in one of the following ways:
By Name: including the registry, repository and tag. For example: docker.io/library/centos:latest
The name does not have to be unique but it is recommended that the identifier is descriptive.
By Image ID: including the full image ID. For example: e934aafc22064b7322c0250f1e32e5ce93b2d19b356f4537f5864bd102e8531f
The full Image ID should be entered. This will be 64 hex characters. There are a variety of ways to retrieve the ID of an image, including using AnchoreCTL, the Anchore UI, and the Docker CLI.
By Image Digest: including the registry, repository and image digest of the image. For example: docker.io/library/centos@sha256:989b936d56b1ace20ddf855a301741e52abca38286382cba7f44443210e96d16
Click OK to add the Allowed or Denied Image item to your policy.
See the following sections for more details about the Name, Image ID, and Image Digest.
For most use cases, it is recommended that the image digest is used to reference the image since an image name is ambiguous. Over time different images may be tagged with the same name.
If an image appears on both the Allowed Images and Denied Images lists, then the denied image entry takes precedence and the image will fail evaluation.
The Allowed Images list shows any allowed images defined in the system and includes the following fields:

| Field | Description |
|---|---|
| Allowlist Name | A user-friendly name to identify the image(s). |
| Type | Describes how the image has been specified: by Name, ID, or Digest. |
| Image | The specification used to define the image. |
| Actions | The actions you can set for the allowed image. |

The copy button can be used to copy the image specification to the clipboard.
An existing image may be deleted or edited using the corresponding action buttons.
Adding an Image by Image ID
The full Image ID should be entered. This will be 64 hex characters. There are a variety of ways to retrieve the ID of an image, including using AnchoreCTL, the Anchore UI, and the Docker CLI.
Using AnchoreCTL
$ anchorectl image get library/debian:latest | grep ID
ID: 8626492fecd368469e92258dfcafe055f636cb9cbc321a5865a98a0a6c99b8dd
Using Docker CLI
$ docker images --no-trunc debian:latest
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/debian latest sha256:8626492fecd368469e92258dfcafe055f636cb9cbc321a5865a98a0a6c99b8dd 3 days ago 101 MB
By default the Docker CLI displays a short ID; the long ID is required, and it can be displayed by using the --no-trunc parameter.
Note: The algorithm (sha256:) should not be entered into the Image ID field.
Adding an Image by Digest
When adding an image by Digest the following fields are required:
Registry. For example: docker.io
Repository. For example: library/debian
Digest. For example: sha256:de3eac83cd481c04c5d6c7344cd7327625a1d8b2540e82a8231b5675cef0ae5f
The full identifier for this image is: docker.io/library/debian@sha256:de3eac83cd481c04c5d6c7344cd7327625a1d8b2540e82a8231b5675cef0ae5f
Note: The tag is not used when referencing an image by digest.
There are a variety of ways to retrieve the digest of an image including using the AnchoreCTL, Anchore UI, and Docker command.
Using AnchoreCTL
$ anchorectl image get library/debian:latest | grep Digest
Digest: sha256:7df746b3af67bbe182a8082a230dbe1483ea1e005c24c19471a6c42a4af6fa82
Using Docker CLI
$ docker images --digests debian
REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
docker.io/debian latest sha256:de3eac83cd481c04c5d6c7344cd7327625a1d8b2540e82a8231b5675cef0ae5f 8626492fecd3 1 days ago 101 MB
Note: Unlike the Image ID entry, the algorithm (sha256:) is required.
Adding an Image by Name
When adding an image by Name, the following fields are required:
Registry. For example: docker.io
Repository. For example: library/debian
Tag. For example: latest
Note: Wildcards are supported, so to trust all images from docker.io you would enter docker.io in the Registry field, and add a * in the Repository and Tag fields.
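Expressed in the underlying policy JSON, an allowed-image entry that trusts every image from docker.io (as described in the note above) might look roughly like the sketch below; the field names mirror the registry, repository, and tag fields in the dialog but are assumptions rather than the exact schema:
{
  "id": "trust-all-dockerio",
  "name": "Trust all images from docker.io",
  "registry": "docker.io",
  "repository": "*",
  "image": { "type": "tag", "value": "*" }
}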
8.7 - Allowlists
Introduction
An allowlist contains one or more exceptions that can be used during policy
evaluation. For example allowing a CVE to be excluded from policy evaluation.
The Allowlist tab shows a list of allowlists present in the policy.
Allowlists are an optional element of the policy, and a policy may
contain multiple instances.
Add a New Allowlist
Click Add New Allowlist to create a new, empty allowlist.
Add a name for the allowlist. A name is required and should be unique.
Optional: Add a description. A description is recommended. Often the
description is updated as new entries are added to the allowlist to explain
any background. For example “Updated to account for false positive in glibc
library”.
Upload or Paste an Allowlist
If you have a JSON document containing an existing allowlist, then you can
upload it into Anchore Enterprise.
Click Import Allowlist to upload an allowlist. You can also
manually edit the allowlist in the native JSON format.
Drag an allowlist file into the dropzone. Or, you can click the “Add a
Local File” button and load it from a local filesystem.
Click OK to upload the allowlist. The system will perform a validation
for the allowlist. Only validated allowlists may be stored by Anchore
Enterprise.
Example allowlist:
{"id":"allowlist1","name":"My First Allowlist","comment":"A allowlist for my first try","version":"2","items":[{"gate":"vulnerabilities","trigger_id":"CVE-2018-0737+*","id":"rule1","expires_on":"2019-12-30T12:00:00Z"}]}
Copying an Allowlist
You can copy an existing allowlist, give it a new name, and use it for a policy
evaluation.
From the Tools drop down, select Copy Allowlist.
Enter a unique name for the allowlist.
Optional: Add a description. This is recommended. Often the description is
updated as new entries are added to the allowlist to explain any background.
Downloading Allowlists
You can download an existing allowlist as a JSON file. From the Tools drop
down, click Download to JSON.
Editing Allowlists
The Allowlists editor allows new allowlist entries to be created, and existing
entries to be edited or removed.
The components:
Gate: The gate the allowlist matches from (ensures trigger_ids are not matched in the wrong context).
Trigger Id: The specific trigger result to match and allowlist. This id is gate/trigger specific as each trigger may have its own trigger_id format. We’ll use the most common for this example: the CVE trigger ids produced by the vulnerabilities->package gate-trigger. The trigger_id specified may include wildcards for partial matches.
id: an identifier for the rule, must only be unique within the allowlist object itself.
Expires On: (optional) specifies when a particular allowlist item expires. This is an RFC3339 date-time string. If the rule matches, but is expired, the policy engine will NOT allowlist according to that match.
The allowlist is processed if it is specified in the mapping rule that was matched during policy evaluation, and is applied to the results of the policy evaluation defined in that same mapping rule. If an allowlist item matches a specific policy trigger output, then the action for that output is set to go, and the policy evaluation result notes that the trigger output was matched for an allowlist item by associating it with the allowlist id and item id of the match.
Choose an allowlist to edit, then click Edit.
Anchore Enterprise supports allowlisting any policy trigger; however, the Allowlists editor currently supports only adding Anchore Security checks, allowing vulnerabilities to be allowlisted.
Choose a gate for the allowlist, for example, vulnerabilities.
A vulnerabilities allowlist entry includes two elements: a CVE / Vulnerability Identifier and a Package.
Enter a CVE / Vulnerability Identifier. The CVE/Vulnerability Identifier
field contains the vulnerability that should be matched by the allowlist.
This can include wildcards.
For example: CVE-2017-7246. This format should match the format of the CVEs
shown in the image vulnerabilities report. Wildcards are supported, however,
care should be taken with using wildcards to prevent allowlisting too many
vulnerabilities.
Enter a package. The package name field contains the package that should be
matched with a vulnerability. For example libc-bin.
Wildcards are also supported within the Package name field.
An allowlist entry may include entries for both the CVE and Package fields
to specify an exact match, for example: Vulnerability: CVE-2005-2541
Package: tar.
In other cases, wildcards may be used where multiple packages may match a
vulnerability. For example, where multiple packages are built from the same
source. Vulnerability: CVE-2017-9000 Package: bind-*
In this example the packages bind-utils, bind-libs and bind-license will all
be allowlisted for CVE-2017-9000.
Special care should be taken with wildcards in the CVE / Vulnerability
Identifier field. In most cases a specific vulnerability identifier will be
entered. In some exceptional cases a wild card in this field may be
appropriate.
A good example of a valid use case for a wildcard in the CVE / Vulnerability
Identifier field is the bind-license package. This package includes a single
copyright text file and is included by default in all CentOS:7 images.
CVEs that are reported against the Bind project are typically applied to all
packages built from the Bind source package. So when a CVE is found in Bind
it is common to see a CVE reported against the bind-license package. To
address this use case it is useful to add an allowlist entry for any
vulnerability (*) to the bind-license package.
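Using the allowlist item format shown in the example JSON earlier in this section, such an entry might look like the following sketch (the id value is illustrative):
{
  "gate": "vulnerabilities",
  "trigger_id": "*+bind-license",
  "id": "allow-bind-license"
}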
Optional: Click to edit an allowlist.
Optional: Click Remove to delete an allowlist.
Ensure that all changes are saved before exiting out of the Edit Allowlists
Items Page. At that point the edits will be sent to Anchore Enterprise.
8.8 - SBOM Drift
Software bill of materials (SBOM) drift describes how SBOMs change over time, and understanding it is a key part of managing your SBOMs. The nature of the changes themselves may give early warning of unexpected behavior or intrusion into the build system that a review without context from previous builds would not easily identify.
To detect drift, you set triggers for policy violations on changes in the SBOM between images with the same tag, so that drift can be detected over time between builds of your images.
Gate: tag_drift
The triggers are:
packages_added
packages_removed
packages_modified
The “tag_drift” gate compares the SBOMs from the image being evaluated as input, and the SBOM of the image that precedes the input image with the requested tag provided for policy evaluation. The triggers in this gate evaluate the result to determine if packages were added, removed, or modified.
Trigger: packages_added
This trigger warns if a package was added to the SBOM.
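For example, a rule that warns when new packages appear between builds of the same tag could be expressed along the lines of the sketch below; the surrounding rule set structure is omitted and the exact schema may vary by version:
{
  "gate": "tag_drift",
  "trigger": "packages_added",
  "action": "WARN",
  "params": []
}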
Anchore SBOM is a set of capabilities in Anchore Enterprise that allows customers to gain comprehensive visibility into the software components present in both their internally developed and third-party supplied software, to identify and mitigate security and compliance risks. It provides a centralized platform for viewing, managing, and analyzing Software Bills of Materials (SBOMs), including the capability to “Bring Your Own SBOMs” (BYOS) by importing SBOMs created outside of Anchore Enterprise and organizing them into groups that reflect logical organizational structures. This enables easier management, control, analysis, and reporting, and enhances collaboration across business and engineering functions.
Why Are SBOMs Important?
Modern software is complex and often built by distributed teams on a foundation of open-source and third-party components. Staying secure and compliant requires continuous, end-to-end insight into the software stack. That means knowing exactly what’s in your applications at every stage of the DevOps lifecycle—from code to cloud. This is where SBOMs (Software Bills of Materials) come in. SBOMs are machine-readable inventories that capture the full composition of applications by listing every package and dependency they include.
How Anchore Enterprise Uses SBOMs
SBOMs are essential to Anchore Enterprise as they contain the package information that enables both the vulnerability scanning and policy enforcement capabilities to function.
A software bill of materials (SBOM) is a comprehensive inventory of the individual packages from source repositories and container images. One or more SBOMs can be grouped in higher-level applications to visualize related artifacts in applications by version. Applications are the top-level building block in a hierarchical view, and can represent any project your teams deliver.
Generating SBOMs for Anchore-Managed Assets
You can generate SBOMs using AnchoreCTL as part of a command line or CI/CD workflow, through pulling content from a registry, or by submitting an artifact to the Anchore API.
SBOMs can be managed using the command line, API, or GUI, where contents can be grouped, annotated, viewed, or searched. Artifact metadata, vulnerability information, and policy evaluations can also be viewed and managed through the same interfaces.
Security Engineers are often required to investigate security issues that stem from a source repository or from container images. The security team can use Anchore Enterprise to identify any open source security vulnerabilities or policy evaluation results which originate from a source code repository or image container. This helps them catch security issues earlier.
Managing External SBOMs
Importing external SBOMs enables users to go beyond standard container analysis by incorporating SBOMs generated outside of Anchore, whether from other SCA tools or vendor sources, in turn ensuring comprehensive visibility across all components of their applications.
Importing External SBOMs
External SBOMs can be imported in SPDX, CycloneDX, and Syft native formats. Imported SBOMs are validated for proper schemas and to ensure they meet the necessary data requirements for vulnerability scanning.
The SBOM formats supported for upload via the experimental SBOM Management features are:
CycloneDX
JSON: Versions 1.2 - 1.6
XML: Versions 1.0 - 1.6
SPDX
JSON: Versions 2.2 - 2.3
Tag-Value: Versions 2.1 - 2.3
Syft
Note that SBOMs produced via anchorectl distributed analysis do not meet the specifications of the above formats and are not supported for external SBOM imports.
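For illustration, a minimal CycloneDX JSON document of the kind that can be imported might look like the sketch below; the component and author values are made up for this example:
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "metadata": {
    "timestamp": "2024-01-01T00:00:00Z",
    "authors": [ { "name": "Example Author" } ]
  },
  "components": [
    { "type": "library", "name": "example-lib", "version": "1.2.3" }
  ]
}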
To import an external SBOM, navigate to the Imported SBOMs view via the left navigation panel. In the top-right of the page, there is an Import SBOM button that will allow you to select the SBOM document you wish to import into Anchore Enterprise.
In the Import an SBOM dialog below, provide the following information:
SBOM Name: A name for the SBOM document
Version: A version for the SBOM document
Groups: Optionally, select one or more SBOM groups with which to associate the imported SBOM
Annotations: Optionally, add any annotations you would like to store with the imported SBOM; an example would be a Vendor for the application or module represented by the SBOM
SBOM File: Select the SBOM document to import into Anchore Enterprise
Document Insights
When importing an external SBOM, Anchore Enterprise will calculate a set of document insights which describe the properties of the given SBOM document. These document insights are used to indicate various quality metrics for the given SBOM, and result in an overall SBOM Quality score.
Note that support for xml and tag-value formats is achieved by converting the stored document to Syft json before inspection, and therefore the document insights will be calculated based on the converted version.
The metrics currently included in the document insights are:
Valid Format
True if the given document can be identified as a valid SBOM of one of these formats:
CycloneDX
SPDX
Syft
Valid Schema
True if the filetype can be identified as one of:
json
xml
spdx (tag-value)
Supported Format:
True if the given document format is within the set of formats that Anchore Enterprise can inspect for further insights. This set is currently:
CycloneDX
SPDX
Syft
Supported Schema:
True if the given document filetype is within the set of filetypes that Anchore Enterprise can inspect for further insights. This set is currently:
json
xml
spdx (tag-value)
Artifacts Documented:
True if the given document contains a set of artifacts or packages.
CycloneDX
True if the document contains a components list of non-zero length.
SPDX
True if the document contains a packages list of non-zero length.
Syft
True if the document contains an artifacts list of non-zero length.
Dependencies Documented:
True if the given document contains a set of dependencies.
CycloneDX
True if the document contains a dependencies list of non-zero length.
SPDX
True if the document contains a relationships list of non-zero length.
Syft
True if the document contains an artifactRelationships list of non-zero length.
Author Documented:
True if the given document contains metadata on the author of the document.
CycloneDX
True if the metadata object of the given document contains either a non-null manufacturer value or an authors list of non-zero length.
SPDX
True if the creationInfo object of the given document contains a creators list of non-zero length.
Syft
Not present in the Syft specification.
Supplier Documented:
True if the given document contains metadata on the supplier of the artifacts.
CycloneDX
True if the metadata object of the given document contains a non-null supplier value.
SPDX
True if entries in the packages list of the given document contain a supplier value that is not empty and is not equal to NOASSERTION.
Syft
Not present in the Syft specification.
Document Timestamp:
True if the given document contains metadata on the creation date-time of the document.
CycloneDX
True if the metadata object of the given document contains a non-null timestamp value.
SPDX
True if the creationInfo object of the given document contains a non-null created value.
Syft
Not present in the Syft specification.
SBOM Quality:
The percentage of the above metrics that are True for the given document.
Organizing Imported SBOMs
Imported SBOMs can be placed into groups to reflect logical organization structures. This can be done as part of an SBOM import, or later once an SBOM has already been imported into Anchore Enterprise.
The SBOM Group Summary view shows the list of associated imported SBOMs along with their key attributes. The group SBOM Quality score shows the average SBOM Quality scores across all of the constituent SBOMs.
Viewing SBOM Contents
To view the contents for an imported SBOM, click on the Contents tab as shown below:
A list of packages, versions, and associated licenses are displayed representing the contents of the current SBOM.
Vulnerability Scanning for Imported SBOMs
Once uploaded, imported SBOMs are placed into a queue to be scanned for vulnerabilities.
Once scanned, vulnerability results can be viewed for a particular imported SBOM or SBOM Group by clicking on the Vulnerabilities tab as shown below:
The list of vulnerabilities can be filtered using the following criteria:
Vulnerability Age: select the number of days since the last time a particular vulnerability has been reported
Minimum Severity: select the desired minimum CVSS severity
Minimum CVSS Score: select the desired minimum CVSS score
The Reset Filters button can be used to revert all filters back to their default values.
The Anchore Rank column provides a sequence value for each vulnerability that can be used to prioritize vulnerability review and remediation. This sequence value is based on the new Anchore Score, which is a composite security index comprised of the CVSS Score and Severity, EPSS percentage, and CISA KEV status. The higher the value, the more significant the vulnerability and the higher the priority to address it.
The Export CSV button in the top-right can be used to export all data for the filtered set of vulnerabilities into a CSV file.
Please note that while the UI only displays the key vulnerability fields and limits the number of displayed vulnerabilities to 100, the CSV data file will include all data fields for the complete set of vulnerabilities matching the filter criteria. Furthermore, the CSV data file will include a record for each vulnerability instance per the affected package and the SBOM containing the affected package.
Performance Considerations
Though no explicit limits are enforced, 10,000 SBOMs and 1,000 Groups have been used as the target for optimal performance in this release.
The SBOM scanning queue has been optimized to facilitate a full rescan of your imported SBOM inventory every 6 hours. SBOMs are scanned using a First In, First Out (FIFO) queue, meaning that the oldest SBOMs in the queue are scanned first.
SBOM Management API (Experimental)
Appropriate user permissions are required to access these API endpoints.
You can use the Anchore Enterprise GUI to see a summary of the artifacts that have been collected into an application. From the application view, you can drill down into the source repositories or container images that make up the application, and browse their software bill of materials (SBOMs).
To work with image container data, you must first load the image container data into the Application view of Enterprise. Once your data is brought in, you can go to Applications to see the summary of the information. The information is categorized by applications, with sub-categories of application versions available from container images.
For information about analyzing images, see Image Analysis.
For information about adding Images, see Scanning Repositories.
When you select an application version, you will see a list of artifacts associated with that application version.
You can download a report for everything in the application or for an individual artifact. The application level download supports JSON format. Artifact level download supports JSON, SPDX, CycloneDX formats.
When you select an artifact link, you will see the analysis options for that artifact. You can then view information about the artifact, such as the policies set up, the vulnerabilities, software bill of materials (SBOM) contents, image metadata information, build summary, and action workbench.
If you want to set up policies, as well as mappings for an artifact, select Policies to set them up there.
9.1.2 - View Applications from Source Repositories
To work with source repository data, you must first use AnchoreCTL or the Anchore API to load the source repository into the Applications view of Enterprise.
Once your data is brought in, you can go to the Applications tab to see the summary of the information. The information is categorized by applications, with sub-categories of application versions available from source repositories.
When you select an application version, you will see a list of artifacts associated with that application version.
You can download a report for everything in the application or for an individual artifact. The application level download supports JSON format. Artifact level download supports JSON, SPDX, CycloneDX formats.
When you select an artifact link, you will see the analysis options for that artifact. You can then view information about the artifact, such as the policies set up, the vulnerabilities, software bill of materials (SBOM) contents, and source metadata information.
If you want to set up policies, as well as mappings for an artifact, select the Policies tab and set them up there.
For information about policies, see Policies.
For information about adding policy mapping, see Policy Mappings.
9.1.3 - Work with Applications Generated from Image Containers
To work with image container data in Anchore Enterprise, you must first load the image container data into Enterprise. For more information, see Scanning Repositories.
Once the data is made available to Anchore Enterprise, you can then analyze it. An example workflow might be as follows.
Start Anchore Enterprise. You will default to the dashboard view. The Dashboard is your configurable landing page where insights into the collective status of your container image environment are displayed. The summary information is displayed through various widgets. Utilizing the Enterprise Reporting Service, the widgets are hydrated with metrics which are generated and updated on a cycle, the duration of which is determined by application configuration. See Dashboard for more information about what you can view.
Click Applications > Container Images to view a summary of the applications in your container images. The information is categorized by applications, with sub-categories of application versions available from image containers that you previously loaded. Notice the list of applications and application versions, as well as any artifacts in the applications.
Click SBOM Report next to the application name to download a software bill of materials (SBOM) report in JSON format for everything in an application. Or, click SBOM Report to download a report for everything in an artifact.
Click an artifact link under Repository Name to view the detailed information for the artifact.
The Images analysis screen for an artifact shows you a summary of what is in that artifact.
From the analysis screen, you can perform the following actions.
Click Policy Compliance to view the policies set up for the artifact. You can see the policy rules that are set up as well.
Click Vulnerabilities to view the vulnerabilities associated with the artifact.
Click SBOM to view the contents of the SBOM(s) associated with the artifact.
Click Image Metadata to view the metadata information for the artifact.
Click Build Summary to see the Manifest, Dockerfile, and Docker History of your artifact.
Click Action Workbench to see the action plans and history for an image artifact.
You have the option to click SBOM Report to download a report for everything in the artifact.
You also have the option to click Compliance Report to download a report that shows the compliance information in the artifact.
Click Policies to set up the rules for the analyzed container image.
9.1.4 - Work with Applications Generated from Source Repositories
To work with source repository data in Anchore Enterprise, you must first use AnchoreCTL or the Anchore API to load the source repository into Enterprise.
Once the data is made available to Anchore Enterprise, you can then view and generate reports specific to an application version. An example workflow might be as follows.
Start Anchore Enterprise. You will default to the dashboard view. The Dashboard is your configurable landing page where insights into the collective status of your source repositories are displayed. The summary information is displayed through various widgets. Utilizing the Enterprise Reporting Service, the widgets are hydrated with metrics which are generated and updated on a cycle, the duration of which is determined by application configuration. See Dashboard for more information about what you can view.
Click Applications > Source Repositories to view a summary of the applications in your source repository. The information is categorized by applications, with sub-categories of application versions available from source repositories that you previously loaded via AnchoreCTL or the Anchore API. Notice the list of applications and application versions, as well as any artifacts in the applications.
Click SBOM Report next to the application name to download a software bill of materials (SBOM) report in JSON format for everything in an application. Or, click SBOM Report to download a report for everything in an artifact.
Click an artifact link for a source repository under Repository Name to view the detailed information for the artifact.
The Sources analysis screen for an artifact shows you a summary of what is in that artifact.
From the analysis screen, you can perform the following actions.
Click Policy Compliance to view the policies set up for the artifact. You can see the policy rules that are set up as well.
Click Vulnerabilities to view the vulnerabilities associated with the artifact.
Click SBOM to view the contents of the SBOM(s) associated with the artifact.
Click Source Metadata to view the metadata information for the artifact.
You have the option to click SBOM Report to download a report for everything in the artifact.
You also have the option to click Compliance Report to download a report that shows the compliance information in the artifact.
Click Policies to set up the rules for the analyzed source repository. The rules set up for an artifact source repository are different from what you apply to a container image.
Anchore Enterprise lets you model your versioned applications to create a comprehensive view of the vulnerability and security health of the projects your teams are building across the breadth of your Software Delivery Lifecycle.
By grouping related components into applications, and updating those components across application versions as projects grow and change, you can get a holistic view of the current and historic security health of the applications from development through built image artifacts.
The typical flow is:
An application will be created for each project that you want to track.
Versions will be created both for the current in-development version and for previous versions.
Artifacts will be grouped together under those application versions.
Applications, application versions, and artifact associations can be managed via either the applications API or AnchoreCTL.
Applications are the top-level building block in this hierarchical view, containing artifacts like packages or image artifacts. Applications can represent any project your teams deliver. Applications have user-specified name and description fields to describe them. Applications are expected to be long-lived constructs, typically with multiple versions added over time.
Application Versions
Each application is associated with one or more application versions. Application versions track the specific grouping of artifacts that comprise a product version. They have one directly user-editable field called version_name which reflects the name of the product’s application version. This field has no special constraints on it, so you can use it to reflect the versioning scheme or schemes for your projects.
Each application, on creation, automatically has one application version created for it, named “HEAD”. “HEAD” is a special version meant to track the in-development version of your product through its release. A typical flow is that, as your CI jobs build new versions of your software, they will add new versions of your source and image artifacts to Anchore Enterprise and associate them with your HEAD application version. On release, you update your “HEAD” version to reflect the actual name of your release (for example, “v1.0.0”), and then create a new “HEAD” version to track development on the next version of your project. Any application version, including the “HEAD” version, can be deleted if needed.
Application versions, rather than applications, are directly associated with artifacts from sources and images. As your project grows and evolves, the packages and package versions associated with it will naturally change and advance over time. Associating them with application versions (rather than directly with applications) allows older application versions to maintain their associations with the older packages that compose them. This allows for historical review auditing and comparison across versions.
Associating Artifacts with Application Versions
An artifact is a generic term that encompasses any SDLC artifact that can be associated with an application version. Currently, that includes sources and images. The application API has endpoints (and AnchoreCTL has subcommands) to manage the associations between application versions and artifacts.
One important distinction is that these endpoints and commands are operating on the association between artifacts and application versions, not on the artifacts themselves. A source or image must already be added to Anchore Enterprise before it can be associated with an application. Similarly, removing the association with an application version does not remove the artifact from Anchore Enterprise. It can later be re-associated with the application version, or another application version.
Application Version software bill of materials (SBOM)
Once an application version has artifacts associated with it, users can generate an application version SBOM, which aggregates the SBOMs for all of the artifacts associated with the application version.
Application Version Vulnerabilities
Users can generate a list of vulnerabilities within an application version. This will be an aggregate of all
vulnerabilities found within the artifacts associated with the specific application version.
9.2.2 - Application Features with the Anchore Enterprise GUI
Anchore Enterprise lets you use the UI to see a summary of the applications available from source repositories. You can perform an analysis of the application and artifact data.
Additionally, you can set your policies and mappings for a source repository, similar to how you set them up for images.
Note: Creating an application will also create an application version named HEAD, used to track the in-development version.
GET the List of All Applications
GET the list of all applications from http://<host:port>/v2/applications/.
Add the include_versions=true flag to include all application versions under each application in the API response.
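A minimal curl sketch of this request (authentication uses HTTP Basic, as in the other curl examples in this documentation):
curl -u <username>:<password> "http://<host:port>/v2/applications/?include_versions=true"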
GET a Single Application
GET a single application by adding the application_id to the GET command. For example: http://<host:port>/v2/applications/<application_id>/.
Add the include_versions=true flag to include all application versions under each application in the API response.
Update an Existing Application
PUT the following to http://<host:port>/v2/applications/<application_id>/ to update an existing application, such as changing the name and description.
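A minimal curl sketch of such an update; the name and description field names in the body are assumptions based on the description above, so consult the Swagger spec for the exact schema:
# Illustrative only: field names are assumptions, verify against the API spec
curl -u <username>:<password> -X PUT \
  -H "Content-Type: application/json" \
  -d '{"name": "<new_name>", "description": "<new_description>"}' \
  "http://<host:port>/v2/applications/<application_id>/"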
Send a DELETE to http://<host:port>/v2/applications/<application_id>/ to remove the specified application.
9.2.3.1 - Application Version Management - Anchore API
Use the Anchore API to manage your application versions. For more information about using Anchore APIs via Swagger, see: Using the Anchore API.
The API application workflow is as follows.
Create an Application Version
To use the Anchore API to create an application version that is associated with an already-existing application, POST the JSON in the block below to http://<host:port>/v2/applications/<application_id>/versions/.
{
"version_name": "v1.0.0"
}
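For example, submitting that body with curl (the Content-Type header is assumed; authentication follows the other curl examples in this documentation):
curl -u <username>:<password> -X POST \
  -H "Content-Type: application/json" \
  -d '{"version_name": "v1.0.0"}' \
  "http://<host:port>/v2/applications/<application_id>/versions/"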
GET the List of All Application Versions
GET the list of all application versions for the application from http://<host:port>/v2/applications/<application_id>/versions.
GET a Single Application Version
GET a specific application version from http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>.
Update an Existing Application Version
To update the name of an existing application version, PUT the following to http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>
{
"version_name": "v1.0.1"
}
Remove a Specified Application Version
To delete an application version, send a DELETE to http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>.
9.2.3.2 - Associate Artifacts with Application Versions - Anchore API
Add an Artifact Association
The following commands require source or image artifacts to already be added to the Anchore Enterprise instance before they can be associated with the application version.
Note: Keep track of the uuid of the sources, and the digest of the images that you will add to the application version. These are the values used to associate each artifact with the application version.
The response body for each artifact association request will contain an artifact_association_metadata block with an association_id field in it. This field uniquely identifies the association between the artifact and the application version, and is used in requests to remove the association.
Associate a Source Artifact
To associate a source artifact, POST the following body to http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>/artifacts.
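An illustrative sketch of such a request (the artifact_type and uuid field names are assumptions; consult the Swagger spec for the authoritative schema):
# Hypothetical request body: field names are illustrative, not authoritative
curl -u <username>:<password> -X POST \
  -H "Content-Type: application/json" \
  -d '{"artifact_type": "source", "uuid": "<source_uuid>"}' \
  "http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>/artifacts"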
Note the fields specific to source artifacts in contrast to the image artifact in the next example.
Associate an Image Artifact
To associate an image artifact, POST the following body to http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>/artifacts.
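An illustrative sketch of such a request (the artifact_type and digest field names are assumptions; consult the Swagger spec for the authoritative schema):
# Hypothetical request body: field names are illustrative, not authoritative
curl -u <username>:<password> -X POST \
  -H "Content-Type: application/json" \
  -d '{"artifact_type": "image", "digest": "<image_digest>"}' \
  "http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>/artifacts"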
Note the fields specific to image artifacts in contrast to the source artifact in the previous example.
Each artifact in the response body will contain an artifact_association_metadata block with an association_id field in it. This field uniquely identifies the association between the artifact and the application version, and is used in requests to remove the association.
List All Artifacts Associated with an Application Version
To list all artifacts associated with an application version, GET http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>/artifacts.
Filter the Results by Artifact Type
To filter the results by artifact type, add the artifact_types=<source,image> query parameter.
Remove an Artifact Association
Send a DELETE request to http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>/artifacts/<association_id>.
9.2.3.3 - Application Version Operations - Anchore API
Users can perform queries against specific versions of an application.
SBOM for a specific Application Version
Use the application API to generate a combined software bill of materials (SBOM) for all artifacts within an application version. This lets you easily archive the components, or provide them to others to verify process compliance requirements. The data structure includes metadata for the application and application version, along with the SBOMs for each artifact associated with the application version.
Download a Combined SBOM
To download a combined SBOM, GET the application version SBOM from http://<host:port>/v2/applications/<application_id>/versions/<application_version_id>/sboms/native-json.
To filter the results by artifact type, add the artifact_types=<source,image> query parameter.
Vulnerabilities for a specific Application Version
Using the application API, a user can generate a combined list of vulnerabilities found among all artifacts within an application version. This allows easier vulnerability management for any application version.
An optional query parameter, will_not_fix=<true | false>, is provided. When true, the results will include any vulnerabilities that the vendor of an image distribution either disagrees with or does not intend to prioritize for remediation.
9.2.4 - Application Management - AnchoreCTL
Use AnchoreCTL to manage your applications. The AnchoreCTL application workflow is as follows.
Create a Named Application
Use AnchoreCTL to create a named application. For example: anchorectl application add <name> --description <description>
Note: Creating an application will also create an application version named HEAD, used to track the in-development version.
List All Applications
Use AnchoreCTL to list all applications. For example: anchorectl application list.
Request an Individual Application
Request an individual application from Anchore via AnchoreCTL to view details about it. For example:
anchorectl application get <application_name>.
Update and Change Properties of an Existing Application
Update and change the properties of an existing application via AnchoreCTL.
For example, change the application name and description as follows: anchorectl application update <application_name> --name <new_name> --description <new_description>.
Remove an Application
Use AnchoreCTL to delete applications. This lets you remove applications that are no longer useful or important to you. For example:
anchorectl application delete <application_name>
9.2.4.1 - Application Version management - AnchoreCTL
Use AnchoreCTL to manage your application versions.
The AnchoreCTL application workflow is as follows.
Create and Store Versions of your Application
Use AnchoreCTL to create and store versions of your applications. Versioning is useful for audit compliance and reporting. Use the following AnchoreCTL command to create a version:
anchorectl application version add <application-name>@<version-name>
List All Application Versions
Use AnchoreCTL to list all application versions that are associated with an application.
anchorectl application version list <application_name>
Update Application Version Properties
Use AnchoreCTL to update application version properties for an existing application in Anchore.
anchorectl application version update <application-name>@<version-name> --name <new_version_name>
Request a Specific Application Version
Use AnchoreCTL to request a specific version of an application to view its details. The following example shows the AnchoreCTL command to request a version:
anchorectl application version get <application-name>@<version-name>
Remove Application Version
Use AnchoreCTL to delete application versions. This lets you remove application versions that are no longer useful or important to you.
anchorectl application version delete <application-name>@<version-name>
9.2.4.2 - Get an Application Version SBOM - AnchoreCTL
Run the anchorectl application version sbom <application_id> <application_version_id> -o json command to download a combined software bill of materials (SBOM) for all components and supply-chain elements of an application. This lets you easily archive the components, or provide them to others for verification process compliance requirements. The data structure includes the version and version metadata for the application version, along with the SBOMs for each associated artifact.
To filter the results by artifact type, add the argument --type <source,image> to the end of the command.
9.2.4.3 - Associate Artifacts with Application Versions - AnchoreCTL
Add an Artifact Association
The following commands require source or image artifacts to already be added to the Anchore Enterprise instance before they can be associated with the application version.
Note: Keep track of the uuid of the sources, and the digest of the images that you will add to the application version. These are the values used to associate each artifact with the application version.
The response body for each artifact association request will contain an artifact_association_metadata block with an association_id field in it. This field uniquely identifies the association between the artifact and the application version.
Associate a Source Artifact
To associate a source artifact:
anchorectl application artifact add <application-name>@<version-name> source <source_uuid>
Associate an Image Artifact
To associate an image artifact:
anchorectl application artifact add <application-name>@<version-name> image <image_digest>
List All Associated Artifacts
To list all artifacts associated with an application version:
anchorectl application artifact list <application-name>@<version-name>
To filter the results by artifact type, add the argument --type <source,image> to the end of the command.
Remove an Artifact Association
Get the association_id of one of the associated artifacts and run the following command:
anchorectl application artifact remove <application-name>@<version-name> <association_id>
9.3 - Generating SBOMs for a Source Repository using the API
Use the Anchore API to import a source repository artifact from a software bill of materials (SBOM) file on disk. You can also get information about the source repository, investigate vulnerability packages by requesting vulnerabilities for a single analyzed source repository, or get any policy evaluations.
For more information about using Anchore APIs via Swagger, see: Using the Anchore API.
The SBOM management API workflow would generally be as follows.
Note: Reference the API endpoints in Swagger for the latest information.
Once you have generated an SBOM using anchorectl, you can use the API to import that SBOM as a source artifact. For example, you first create the import “operation” (job) for importing a source.
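A rough sketch of that first step is shown below; the /v2/imports/sources path and the follow-up steps are assumptions about the general shape of the workflow, so verify the exact endpoints and payloads against the Swagger spec:
# Hypothetical workflow sketch: verify paths and payloads against the Swagger spec
curl -u <username>:<password> -X POST "http://<host:port>/v2/imports/sources"
# The response should contain an operation (job) identifier; subsequent requests
# upload the anchorectl-generated SBOM and finalize the import against that operation.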
The package override feature for license data allows the user to override license information for specific packages in their Anchore Enterprise deployment.
License Summary Document
Anchore Enterprise provides a per-image license summary document that lists packages and the available license data. Each package listed provides a list of license identifiers that were detected at the time of analysis. When a license identifier is a valid SPDX expression, additional fields are included, such as the license name, text, header, URL, and copyright.
This document can be retrieved via the GET /v2/images/{image_digest}/content/licenses endpoint.
Note: Any license override data will be reflected in the license summary document.
Creating an Override of License Data
Note: Any overrides provided will be used globally on any image that contains that specified package.
For a specific package, identified by its PURL, you can override the license data provided by the upstream package manager. Any license field used in the override request will be used in place of the license data originally found.
This is done with a POST to the endpoint /exp/system/package-overrides/licenses.
An override may target one or more of the licenses that the package originally showed, or it can target specific license data such as the license text, header, URL, or copyright.
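An illustrative sketch of such an override request (the purl and licenses field names and values are assumptions; the authoritative schema is in the Experimental API documentation referenced below):
# Hypothetical request body: field names are illustrative, not authoritative
curl -u <username>:<password> -X POST \
  -H "Content-Type: application/json" \
  -d '{"purl": "pkg:deb/debian/openssl@1.1.1", "licenses": [{"id": "OpenSSL", "url": "https://www.openssl.org/source/license.html"}]}' \
  "http://<host:port>/exp/system/package-overrides/licenses"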
License Override RBAC Role
A new RBAC role has been added to support the license override feature. The license-override role can be conferred to any user, who will then be able to create, update, and delete license overrides. The license-override role has a domain value of system and a resource value of license-overrides. The license-override role is not required to view the license summary document.
Any user with the system-admin role also has permission to create, update, and delete license overrides.
Anchore Experimental API
Please review the Experimental API documentation for a complete description of the license override feature.
The license override feature is part of the Experimental API and is subject to change.
10 - Troubleshooting
This section contains some general troubleshooting for your Anchore Enterprise instance. When troubleshooting Anchore Enterprise, the recommended approach is to first verify all Anchore services are up, use the event subsystem to narrow down particular issues, and then navigate to the logs for specific services to find out more information.
Throughout this section, AnchoreCTL commands will be executed to assist with troubleshooting. For more information about AnchoreCTL, please reference the AnchoreCTL section.
Smoke testing typically refers to a testing methodology that validates critical or crucial functionality of software. Versions of AnchoreCTL after 5.6.0 include a smoke-tests option, which can be used to validate the general functionality of your Anchore Enterprise deployment.
We recommend using this mechanism to validate functionality after upgrades.
Tip: The check-admin-credentials test will look for an admin user in the admin account context as defined in your anchorectl.yaml.
./anchorectl system smoke-tests run
⠇ Running smoke tests
...
✔ Ran smoke tests
┌───────────────────────────────────────┬─────────────────────────────────────────────────┬────────┬────────┐
│ NAME │ DESCRIPTION │ RESULT │ STDERR │
├───────────────────────────────────────┼─────────────────────────────────────────────────┼────────┼────────┤
│ wait-for-system │ Wait for the system to be ready │ pass │ │
│ check-admin-credentials │ Check anchorectl credentials to run smoke tests │ pass │ │
│ create-test-account │ Create a test account │ pass │ │
│ list-test-policies │ List the test policies │ pass │ │
│ get-test-policy │ Get the test policy │ pass │ │
│ activate-test-default-policy │ Activate the test default policy │ pass │ │
│ create-test-image │ Create a test image and wait for analysis │ pass │ │
│ get-test-image │ Get the test image │ pass │ │
│ activate-test-subscription │ Activate a test subscription │ pass │ │
│ get-test-subscription │ Get the test subscription │ pass │ │
│ deactivate-test-vuln-subscription │ Deactivate the vuln subscription │ pass │ │
│ deactivate-test-policy-subscription │ Deactivate the policy subscription │ pass │ │
│ deactivate-test-tag-subscription │ Deactivate the tag subscription │ pass │ │
│ deactivate-test-analysis-subscription │ Deactivate the analysis subscription │ pass │ │
│ check-test-image │ Check the test image │ pass │ │
│ get-test-image-vulnerabilities │ Get the test image vulnerabilities │ pass │ │
│ delete-test-image │ Delete the test image │ pass │ │
│ disable-test-account │ Disable the test account │ pass │ │
│ delete-test-account │ Delete the test account │ pass │ │
└───────────────────────────────────────┴─────────────────────────────────────────────────┴────────┴────────┘
10.2 - Viewing Logs
Anchore services produce detailed logs that contain information about user interactions, internal processes, warnings and errors. The verbosity of the logs is controlled using the logging.log_level setting in config.yaml (for manual installations) or the corresponding ANCHORE_LOG_LEVEL environment variable (for docker compose or Helm installations) for each service.
The log levels are DEBUG, INFO, WARNING, ERROR, and CRITICAL, where the default is INFO. Most of the time the default level is sufficient, as the logs will contain WARNING, ERROR, and CRITICAL messages as well. But for deep troubleshooting, it is always recommended to increase the log level to DEBUG in order to ensure the availability of the maximum amount of information.
Anchore logs can be accessed by inspecting the docker logs for any anchore service container using the regular docker logging mechanisms, which typically default to displaying to the stdout/stderr of the containers themselves - for example:
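A minimal sketch, assuming a docker compose quickstart deployment with an analyzer service (container and service names depend on your deployment):
# Follow the logs of a single service container
docker logs <anchore-analyzer-container-id>
# Or, from the directory containing your compose file
docker compose logs -f analyzer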
The logs themselves are also persisted as log files inside the Anchore service containers. By executing a shell into any Anchore service container and navigating to /var/log/anchore, you will find the service log files. For example, using the same analyzer container service as described previously:
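A minimal sketch, again assuming an analyzer container (the container name is a placeholder):
# Open a shell in the analyzer container and list the persisted log files
docker exec -it <anchore-analyzer-container-id> /bin/bash
ls /var/log/anchore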
If you’ve successfully verified that all Anchore Enterprise services are up, but you are still running into issues operating Anchore, a good place to check is the event log.
The event log subsystem provides users with a mechanism to inspect asynchronous events occurring across various Anchore Enterprise services. Anchore events include periodically-triggered activities such as vulnerability data feed sync in the policy_engine service, image analysis failures originating from the analyzer service, and other informational or system fault events. The catalog service may also generate events for any repositories or image tags that are being watched when Anchore Enterprise encounters connectivity, authentication, authorization, or other errors in the process of checking for updates.
The event log is aimed at troubleshooting most common failure scenarios, especially those that happen during asynchronous operations, and to pinpoint the reasons for failures that can be used subsequently to help with corrective actions. Events can be cleared from Anchore Enterprise in bulk or individually.
Viewing Events
Running the following command will give a list of recent Anchore events: anchorectl event list
If you would like more information about a specific event, you can run the following command: anchorectl event get <event-id>
# Details about a specific Anchore event
# anchorectl event get 1eb04509b2bc44208cdc7678eaf76fef
✔ Fetched event
UUID: 1eb04509b2bc44208cdc7678eaf76fef
Event:
Event Type: user.image.analysis.completed
Level: info
Message: Image analysis available
Resource:
Resource ID: docker.io/ubuntu:latest
Resource Type: image_tag
User Id: admin
Source:
Source Service: analyzer
Base Url: http://analyzer:8228
Source Host: anchore-quickstart
Request Id:
Timestamp: 2022-08-24T22:06:13.736004Z
Category:
Details:
Created At: 2022-08-24T22:06:13.832881Z
Note: Depending on the output from the detailed events, looking into the logs for a particular servicename (example: policy_engine) is the next troubleshooting step.
10.4 - Data Syncer
Anchore Enterprise runs a hosted data service called the Anchore Data Service. This service publishes datasets from a number of provider sources.
The Data Syncer Service is a core component of Enterprise. Its job is to periodically query Anchore Data Service and download any new datasets available.
Performing a basic health check
Run $ anchorectl feed list as admin and ensure that the last sync date shown is recent and that the feed has enabled set to true.
Run $ anchorectl feed sync as admin to queue an update that fetches the data from the data service and propagates feed data across internal services. Otherwise, this runs on a regular schedule.
You can also visually check the health in the ‘System’ section of the UI when logged in as admin.
Configuration checks
Check that the data syncer pod/container has sufficient resources:
Storage
Ensure your data syncer pod has enough storage (around 2 GB of writable space) to cache the datasets to disk; this reduces database queries.
Memory
Ensure the data syncer pod has sufficient memory (around 2 GB), especially if you are running multiple analyzers.
Network
Ensure your data-syncer pod/container has network connectivity to the hosted Anchore Data Service by exec’ing into the container and running a connectivity check, for example:
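The data service URL below is a placeholder for the endpoint configured in your deployment, and curl may not be present in every container image:
# From inside the data-syncer container, confirm outbound connectivity
curl -I https://<anchore-data-service-endpoint>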
If you have a network proxy deployed, you might need to configure your feed service to utilize it:
Ensure your policy pod/container has network connectivity to your local data-syncer pod/container. For example, run curl http://anchore-data-syncer:8448/v2/datasets/vulnerability_db/5/latest and confirm that it returns success.
Operational Checks and Verification
Feed list shows up empty: Check whether your feed syncs are happening; there should be data_syncer events in the event log, and they should be successful. If there are failures, open the event in the event log to see the cause of the failure.
Data-syncer is reporting errors fetching new datasets: Check the Anchore Data Service status page. If the service is reported as up and running, check your firewall settings. If the service is reporting any failures, please wait for the service to recover.
I see a lot of 404s in the data-syncer and policy engine logs as soon as the services start: This is normal; the data-syncer takes a few minutes after startup to successfully sync down the configured datasets from the Anchore Data Service. The policy engine service starts asking for the latest vulnerability dataset as soon as it starts up, and it takes a few minutes for the system to reconcile. (This is only true for new greenfield deployments.)
My first analyzer scan takes longer than the rest: The first analyzer scan can take up to 5 minutes because the analyzer waits for the data-syncer to sync down a ClamAV database. Subsequent scans will not incur this penalty.
10.5 - Verifying Service Health
You can verify which services have registered themselves successfully, along with their status, by running: anchorectl system status
# anchorectl system status
✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE │ HOST ID │ URL │ UP │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer │ anchore-quickstart │ http://analyzer:8228 │ true │ available │ 5180 │ 5.18.0 │
│ policy_engine │ anchore-quickstart │ http://policy-engine:8228 │ true │ available │ 5180 │ 5.18.0 │
│ apiext │ anchore-quickstart │ http://api:8228 │ true │ available │ 5180 │ 5.18.0 │
│ reports │ anchore-quickstart │ http://reports:8228 │ true │ available │ 5180 │ 5.18.0 │
│ reports_worker │ anchore-quickstart │ http://reports-worker:8228 │ true │ available │ 5180 │ 5.18.0 │
│ data_syncer │ anchore-quickstart │ http://data-syncer:8228 │ true │ available │ 5180 │ 5.18.0 │
│ simplequeue │ anchore-quickstart │ http://queue:8228 │ true │ available │ 5180 │ 5.18.0 │
│ notifications │ anchore-quickstart │ http://notifications:8228 │ true │ available │ 5180 │ 5.18.0 │
│ catalog │ anchore-quickstart │ http://catalog:8228 │ true │ available │ 5180 │ 5.18.0 │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘
Note: If specific services are down, you can investigate the logs for the services. For more information, see Viewing Logs.
The -vvv and --json options
Passing the -vvv option to AnchoreCTL can often help narrow down particular issues by displaying the client configuration and client functions as they are running:
# Example system status with -vvv
# anchorectl -vvv system status
[0000] INFO anchorectl version: 5.18.0
[0000] DEBUG application config:
url: http://localhost:8228
username: admin
password: '******'
...
[0000] DEBUG command config:
format: text
[0000] DEBUG checking if new version of anchorectl is available
[0000] TRACE worker stopped component=eventloop
[0000] TRACE bus stopped component=eventloop
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE │ HOST ID │ URL │ UP │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer │ anchore-quickstart │ http://analyzer:8228 │ true │ available │ 5180 │ 5.18.0 │
│ policy_engine │ anchore-quickstart │ http://policy-engine:8228 │ true │ available │ 5180 │ 5.18.0 │
│ apiext │ anchore-quickstart │ http://api:8228 │ true │ available │ 5180 │ 5.18.0 │
│ reports │ anchore-quickstart │ http://reports:8228 │ true │ available │ 5180 │ 5.18.0 │
│ reports_worker │ anchore-quickstart │ http://reports-worker:8228 │ true │ available │ 5180 │ 5.18.0 │
│ data_syncer │ anchore-quickstart │ http://data-syncer:8228 │ true │ available │ 5180 │ 5.18.0 │
│ simplequeue │ anchore-quickstart │ http://queue:8228 │ true │ available │ 5180 │ 5.18.0 │
│ notifications │ anchore-quickstart │ http://notifications:8228 │ true │ available │ 5180 │ 5.18.0 │
│ catalog │ anchore-quickstart │ http://catalog:8228 │ true │ available │ 5180 │ 5.18.0 │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘
Passing the --json option to AnchoreCTL commands will output the API response data in JSON, which often contains much more information than what the CLI outputs by default for both regular successful operations, and for operations that are resulting in an error:
# anchorectl -o json system status
✔ Status system
{
"serviceStates": [
{
"baseUrl": "http://reports_worker:8228",
"hostid": "anchore-quickstart",
"serviceDetail": {
...
...
11 - API
APIs Overview
Anchore Enterprise is an API-first system. All functions available in the UI and AnchoreCTL are constructed from the same
APIs directly available to users. The APIs are a combination of an OpenAPI-specified REST-like API and a reporting-specific GraphQL API.
REST API
The REST API is the primary API for interacting with Anchore and has the most functionality. The Anchore V2 API is viewable in the following ways:
Retrieve your local spec with curl http://{servername:port}/v2/openapi.json
GraphQL API
The GraphQL API is intended for reporting functions and aggregating data from all resources in an Anchore account and does not provide the same functionality as the REST API. The data
that the GraphQL API operates on is updated differently than the data in the REST API and thus may have an update lag between
when changes are visible via the REST API and when that data flows into functionality covered by the GraphQL API.
Both the REST and GraphQL APIs are exposed on a network and should be protected at the channel level using TLS. Regardless of the authentication scheme,
transport security ensures resistance to replay attacks and other forms of request and credential abuse and should always be used.
See Configuring TLS for setting up TLS in Anchore services directly, or use TLS
termination via load balancers or service meshes such as Istio and LinkerD. The
right choice for your deployment will depend on your specific environment and requirements.
Anchore APIs support three authentication methods:
HTTP Basic
Use the HTTP Authorization header: Authorization: Basic <base64_encode(<username> + ':' + <password>)> along with your native account credentials.
Both the REST and GraphQL APIs implement authorization with Role-Based Access Control (RBAC). The APIs also support cross-account access.
In the following example, we query for images in an account named 'product1' instead of the account in which the user resides.
curl -X GET -u {username:password} -H "x-anchore-account: product1" "http://{servername:port}/v2/images"
11.1 - REST Anchore API
Reference for the Anchore API V2
11.2 - GraphQL Reports API Access
Anchore Enterprise Reports provides a GraphQL API for direct interaction with the service.
GraphQL is a query language for APIs and a runtime for fulfilling those queries.
The main Anchore REST API includes operations for retrieving scheduled report results as static sets to make retrieval
of saved results simpler. It is available in the V2 API. The GraphQL schema and types are documented at https://graphql.org/learn/schema/.
Get started
There are different ways of interacting with the Anchore Enterprise Reports GraphQL API. The following sections highlight
two different options for exploring the Anchore Enterprise Reports GraphQL schema with a few examples.
The endpoint you use for interacting with the GraphQL API is the same host and port as the main V2 API. The path is:
/v2/reports/.
Graphical User Interface
One of the ways of exploring and testing a GraphQL schema is by using a web based interface - GraphiQL.
GraphiQL is built-in to the API service and enabled by default in Anchore Enterprise.
To access it in a running Anchore Enterprise deployment, open one of the following URLs in a browser:
You will be prompted to enter your Anchore Enterprise credentials.
Working with queries in GraphiQL
Click the show Documentation Explorer button (book icon) in the top left of the GraphiQL window to view the self-describing schema. There are only two root types exposed currently, Query and Mutation.
Query type can be used to obtain image vulnerability data on request.
Mutation type can be used to create new as well as execute and manage existing scheduled queries.
Expand the Query type by clicking on the hyperlink to expose all sub-types. Notice the schema example for runtimeInventoryImagesByVulnerability in the schema docs:
The documentation snippet above consists of (1) a query type name, (2) arguments for the maximum number of responses to be returned within each page, and (3) the nextToken supplied from a previous response if this number is exceeded (see the pagination section below for more information), as well as (4) the filter, which must be correctly defined for the query to be successful. Some basic arguments may be omitted for simple test queries; however, the root query type structure shown in the documentation must always be surrounded by curly braces {..} when used in the editor.
When a query structure is formatted correctly, it will be colorized to indicate that it matches the schema. Ensure that the limit argument is defined as an integer if used, and omit the nextToken string if not required. Check the right-hand panel for any syntax errors.
Now click on the hyperlink in the schema docs for the filter RuntimeInventoryImagesByVulnerabilityFilter. This filter schema defines three additional filters which can each be used individually within the previous structure.
The above example shows one RuntimeInventoryImagesByVulnerabilityFilter artifact filter field called ’name’ which searches for any artifacts with ‘gzip’. More than one field can be used in each filter, and more than one filter field can be defined in multiple filters.
After closing the parentheses (which hold the query type arguments) a response section is added within curly braces to define the results which will be returned. This approach allows different response results to be freely defined according to the query response schema linked to from the top level schema doc RuntimeInventoryImagesByVulnerabilityResponse:
Notice that the query response structure also allows both 'images' and 'artifacts' fields to be defined using their own individual arguments; click on the named hyperlinks to traverse the structure for examples. When combined, the type definitions for the query and the response create a full query.
In summary, the GraphiQL interface is handy for exploring and constructing queries supported by the backing API. On the left you can explore the API query docs specific to Anchore, in the middle you can construct and execute your query, and on the right you can see your query results.
Happy querying!
Command-Line Interface
You can also use curl to send HTTP requests to the Anchore Enterprise Reports API. To view the schema:
$ curl -u <username:password> -X POST "http://<servername:port>/v2/reports/graphql?query=%7B__schema%7BqueryType%7Bname%20description%20fields%7Bname%20description%20args%7Bname%20description%20type%7Bname%20kind%7D%7D%7D%7D%7D%7D%0A"
You can use the API programmatically within the command line and other custom scripts/programs.
Query Options
API Key
If you are interacting directly with the API, either via a command-line tool, custom scripts/programs, or the GraphiQL GUI interface, you can generate and utilize API keys to avoid using private credentials. See Generating API keys for details.
API keys work the same way as your regular credentials for both command-line and GUI queries. The username for API keys is static as _api_key and the password is the value of the generated key string.
Visit your GraphiQL endpoint in your web browser, and use _api_key as the username and your generated key value as the password when prompted.
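For example, the schema query shown earlier can also be sent with an API key in place of user credentials (the query value is abbreviated here):
curl -u "_api_key:<generated_key_value>" -X POST "http://<servername:port>/v2/reports/graphql?query=<url_encoded_query>"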
Pagination
Depending on the size of the data set (that is, the number of tags, images, and so on in the system), the results of a query could be very large. The reports service implements pagination for all queries to better handle the volume of the results.
All response types contain a metadata object called pageInfo. It is optional, but recommended, to add it to all queries.
A non-null nextToken indicates that results are paginated. To get the next page of results, fire the same query along with the nextToken from the last response as a query parameter:
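A minimal sketch of that shape (the runtimeInventoryImagesByVulnerability query and the limit, nextToken, filter, and pageInfo names come from the schema discussion above; the placeholder values and the JSON request format are assumptions, and the query can equally be URL-encoded as in the earlier curl example):
# Illustrative only: replace the placeholders with a valid filter and the token from the previous response
curl -u <username>:<password> -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ runtimeInventoryImagesByVulnerability(limit: 100, nextToken: \"<token_from_previous_response>\", filter: { <filter_fields> }) { pageInfo { nextToken } <response_fields> } }"}' \
  "http://<servername:port>/v2/reports/graphql"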
List vulnerabilities of a specific severity. And include all the images, currently or historically mapped to a tag,
affected by each vulnerability
Use the query's filter argument to specify the conditionality. The reports service defaults to the "current" image-tag mapping when computing results. To compute results across all image-tag mappings, current and historic, set the tag filter's currentOnly attribute to false. Query for vulnerabilities of Critical severity:
To get more details such as tag mappings for the image, add the relevant attributes from the schema to the body of the query.
List vulnerabilities detected in the last x hours. And include all the images, currently or historically mapped to a tag, affected by each vulnerability
Use vulnerability filter’s after and before attributes for specifying a time window using UTC timestamps. Query for vulnerabilities detected after/since August 1st 2019:
Given a vulnerability ID, list all the artifacts affected by that vulnerability. And include all the images, currently or historically mapped to a tag, containing the said artifact
Use vulnerability filter’s id attribute for specifying a vulnerability identifier. Query for vulnerability ID CVE-2019-15213:
Scheduled reports can be created via the Anchore Enterprise reporting Web UI or using the GraphQL API and mutation type.
Once you have created a scheduled report, the following example below will run you through how you can retrieve and use this report data.
Please note:
Large queries will run MUCH faster when created and run as a scheduled report than when manually paginating through the API.
When a scheduled report is created directly via the GraphQL API, it will NOT show up in the UI.
To retrieve reports with global scope, you will need to use the /v2/reports/global/graphql route mentioned above.
Retrieve a list of UUIDs for all scheduled report executions. Use the selected UUIDs to retrieve the report content from the API:
If upgrading from a release in the range of v5.0.0 - v5.17.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
Anchore SBOM - SBOM Management
Anchore Enterprise now provides the ability to upload and manage your company’s SBOMs.
The feature provides the ability to view package contents and vulnerabilities found in the SBOMs uploaded to Anchore Enterprise.
New Prometheus metrics are available to monitor the SBOM Management feature.
The Imported SBOM count is included in your Total SBOM Usage available in the UI.
For more detail on this feature, please see the SBOM Management documentation.
Adds a new database index for reports_tags table to improve performance of queries that filter by account name and image digest.
Adds support to detect chrome binaries.
Improved the performance of the Tag Drift Policy Gate.
Exposes the max_scan_time configuration option in the API to allow users to change the value within the UI.
This value is the maximum time in milliseconds that a ClamAV Malware Scan is allowed to run.
License
When license content is found but the license ID cannot be determined, the license value will be listed as other-indeterminate and the license content will be included in the license data returned by /v2/images/{image_digest}/content/licenses.
Identification of old analysis data
In a future release of Anchore Enterprise, analysis data generated prior to the 4.0 release will no longer be supported. If these images are still
important to your organization, we highly recommend that you force reanalyze them to ensure that you have the most current analysis data for them.
Many improvements have been made to our scanning and analysis capabilities including improvements to package and vulnerability detection, license identification, and more.
To assist in identifying older artifacts in your system, a warning message for each artifact analyzed before the 4.0 release will be printed
during the upgrade job. It will include the account name, image pull string and image digest. This will allow you to identify which images need to be force reanalyzed.
Various supporting libraries have been updated in order to improve security.
Fixes
Resolves an issue where an image that has a change in parent digest was not correctly reflected in reports.
Addresses an issue in Syft which resulted in our inability to determine a dpkg license with the data provided during analysis.
Addresses an issue in Syft which resulted in the license content showing up in the licenses field instead of just the license id.
Addresses an issue in Syft where the Dotnet deps cataloger would hang while resolving dependencies.
Fixes an issue seen when you have linux-kernel entries in your image and Enterprise was surfacing these entries as packages of both the os type and the linux-kernel type, which caused any vulnerability matches to be duplicated for that image.
Fixes an issue where the Dotnet cataloger within Syft could produce a different number of packages when run on the same image multiple times.
Fixes a failure with Strict Configuration Validation when enabling OSAA Migration.
Resolved an issue where a warning message regarding unused environment variables was being printed 3 times during startup.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
The webhook system managed in the configuration file is being deprecated in favor of the more advanced notification system which can be configured to send notifications to webhook endpoints. Please see Notifications for more information on configuring notifications.
UI Updates
Improvements
SBOM Management
Import and process SBOMs generated by any tool adhering to the SPDX or CycloneDX standards via the new Imported SBOMs tab. Establish a comprehensive inventory of software components and dependencies, regardless of origin.
View packages within uploaded SBOMs, including their associated licenses.
Automatically identify and report vulnerabilities within uploaded SBOMs, and export detailed vulnerability data in CSV format. Use the new Anchore Score - a composite metric combining CVSS score and severity, EPSS percentage, and CISA KEV data - to prioritize and triage vulnerabilities effectively, significantly reducing noise and accelerating triage time.
Our table columns now clearly indicate whether they’re sortable, and when sorting is applied, they show the direction - ascending or descending - at a glance.
For account administrators, a new ‘Groups’ column has been added within System > Accounts > Users, which lists all user groups with roles for the user’s primary account.
A new Show OS CVEs filter has been added to Artifact Analysis > Vulnerabilities. Combined with the existing Show Non-OS CVEs toggle, this new filter allows users to either exclusively display OS CVEs, hide them, or show everything.
Page headers across the application have been refreshed for consistency and our lovely robots have been relocated to the sidebar. Primary actions are also highlighted in the top-right of the page header.
The Add / Edit User Group modal under System > User Groups now allows you to associate system-wide roles with a group.
For clarity, the Email field for an Account has been updated to be Contact Email instead.
Fixes
When a system limit was set via deployment configuration and the limit was reached, the UI incorrectly stated that the limit was being approached. This has been fixed.
Previously, it was possible to add invalid regular expressions as rule parameters for rules that required them (such as the filename regex field under the secret scans gate). Validation is now enforced, and invalid expressions can no longer be added through the UI.
Previously, a user had to click the table column header text to sort the column. Now, a user can click anywhere within the table column header cell to trigger a sort.
In previous versions, attempting to analyze a repository that already exists via the Analyze a Repository modal in the Image Selection view could cause a page exception after entering the repository details. This issue has now been resolved.
If upgrading from a release in the range of v5.0.0 - v5.16.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Fixes
Resolves an issue in the object store driver migration code path that prevented data from being successfully transferred from the old data store to the new one.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
The webhook system managed in the configuration file is being deprecated in favor of the more advanced notification system which can be configured to send notifications to webhook endpoints. Please see Notifications for more information on configuring notifications.
If upgrading from a release in the range of v5.0.0 - v5.16.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
Memory Usage Improvements
We’ve made targeted improvements to the memory usage profile of various services that reduce the amount of memory the services use in most circumstances.
Image Analysis
When adding an image to Anchore Enterprise from a multi-platform manifest list, the Linux operating system digest
will be preferred over a Windows digest. This change ensures that the most commonly used platform is prioritized for analysis and scanning.
Prometheus Metrics
Prometheus metrics are now available on a per-account basis. This allows for more granular monitoring and alerting based on account-specific metrics.
Per-account metrics are enabled in the helm chart by setting services.catalog.account_prometheus_metrics to true.
License
A new endpoint provides detailed information about the software licenses for each package contained within an image:
GET /v2/images/{image_digest}/content/licenses
Feeds
The GET /v2/system/feeds endpoint has been updated to include two new timestamps that clarify when vulnerability data was downloaded and built in the Anchore Data Service (data_service_built_at) and when it was received by your Enterprise deployment (enterprise_received_at). All other timestamps returned by that endpoint continue to be updated but have been marked deprecated.
Corrections
Corrections now have templating support for the package URL (PURL) field.
Logging
Anchore Enterprise will print a warning-level log message when any ANCHORE_* environment variables are detected without a reference in the config file. This is an indication of a potential misconfiguration. The log message will start with the words Detected Anchore environment variables which are not referenced in the configuration file: {'ANCHORE_....
Fixes
Fixes an issue where the Vulnerability Fix Observed At Date was not being captured.
When the root owning package node is a nix package, any owned packages are no longer filtered. This is due to the fact that there is
currently no distro-level vulnerability data ingested for nix. The only method of getting a possible vulnerability match will be via the
descendant packages (python, npm, go, etc).
When first configuring SSO on your Anchore Enterprise deployment, if you allowed the default account to be automatically
created when the first user logged in, the account would not have received the default policy. This has been fixed.
Fixes an exception seen when requesting a forced feed sync via POST /v2/system/feeds?force_sync=true.
Fixes the error message returned if the user provided an invalid policy gate.
When the image is “force reanalyzed”, the analysis will re-evaluate the parent digest.
Fixes an issue where the license policy gate was not working properly for non-os packages.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
The webhook system managed in the configuration file is being deprecated in favor of the more advanced notification system which can be configured to send notifications to webhook endpoints. Please see Notifications for more information on configuring notifications.
UI Updates
Improvements
Sticky headers are now enabled on select tables, so you can keep
column names in view while scrolling. Try it out in the Images,
Events, and System > Accounts views.
The sidebar navigation menu now includes tooltips when collapsed to
help identify the icons and their associated views.
The Redis connection string now supports the rediss:// protocol,
allowing TLS connections to resources that use a certificate authority.
The SBOM > Malware tab in Artifact Analysis will now show whether
Malware Scanning is active on your Anchore instance or if there are
no findings from the scan. It has also been pinned in the top list.
Loading the list of LDAP mappings on the System page has been optimized
to improve performance.
The generic error message displayed when an artifact analysis fails
has been replaced in favor of more informative service-level messaging
to aid in troubleshooting.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
Fixes
In previous versions, the tour step information associated with
adding a repository and adding a tag was inverted. Now fixed.
Previously, nix and alpm packages were not displaying correctly
within the Artifact Analysis > Vulnerabilities view. This has
been fixed.
In the Dashboard detail view, the labels at the top-right of the
page could be occluded by the robot image. Now fixed.
Fixed an issue where new users were shown the welcome banner on both
their first and second logins.
If upgrading from a release in the range of v5.0.0 - v5.15.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
Policy
The files Gate with suid or guid set Trigger now provides a new parameter ignore dir to allow users to indicate if directories should be ignored when checking for setuid/setgid. This parameter is optional and defaults to false.
The package Gate with denylist Trigger now provides a new parameter that allows version comparison operations. The default behavior is still an exact match.
Package License Names
For newly analyzed images, the license names are now normalized to the SPDX License List. This will help with
consistency in the UI and API responses. When an exact match is not found, we will continue to use the value found
within the image and extracted by Syft. For more information on normalized license names, please
review SPDX License List.
Please Note: if you are currently using the license field in your policy gates, you may need to update
your policy to reflect the new normalized license names. For example, if you are using GPL-2.0 in your policy,
you will need to update it to GPL-2.0-only to match the SPDX License List.
Image Hints
When using image hints, the resulting application of the hints will be visible in a downloaded SBOM in Syft Native, SPDX, and CycloneDX formats. This allows users to see the hints that were applied to the image.
This will apply to only newly analyzed images. If you would like to see hints applied to an existing image, you will need to reanalyze the image.
Fixes
Fixes a URL encoding issue found in some notifications when the account name has a space in it.
Centralized Analysis now supports images that have been compressed using zstd compression.
RBAC Roles now correctly reflect the allowed permissions. During a review, it was found that the read-only, read-write, and image-developer roles had included listFeeds, updateFeeds, listServices, and getService permissions that were not correct. These permissions are only allowed for users with the system-admin role. This is a documentation-only change; no change in user behavior is expected.
Provides a better error message when creating a new user and the name conflicts with an existing User Group name.
Prevents race conditions that could occur when adding the same image multiple times and also when deleting the same image. This could result in the image analysis failing.
Improves the analyzer queue by implementing a Round Robin algorithm to ensure that each account is serviced equally.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
The webhook system managed in the configuration file is being deprecated in favor of the more advanced notification system which can be configured to send notifications to webhook endpoints. Please see Notifications for more information on configuring notifications.
UI Updates
Improvements
All links to documentation within the application have been updated to use the version your system is using for accuracy, not just the latest version.
Administrators can now configure a custom message on the login screen with a limit of 10,000 characters. The title also now supports a limit of 250 characters.
The About modal now includes the commit SHA and build timestamp of the Enterprise Client and Service. This information is useful for troubleshooting and support purposes.
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
Fixes
When a small screen height was used, the login content could display over the top navigation bar. This has been fixed.
If upgrading from a release in the range of v5.0.0 - v5.14.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Fixes
Update Anchore Enterprise with the latest version of AnchoreCTL v5.15.1
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
The webhook system managed in the configuration file is being deprecated in favor of the more advanced notification system which can be configured to send notifications to webhook endpoints. Please see Notifications for more information on configuring notifications.
If upgrading from a release in the range of v5.0.0 - v5.14.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
API
Improved the ImageContent Object description.
GET /v2/version now provides the commit SHA and the build datetime for the Enterprise Image.
Various package updates to improve security and performance.
Fixes
Fixes an issue in determining whether a policy_eval event should be issued when the policy evaluation result has changed. For customers who have alerts enabled, this may have resulted in multiple events being generated in error.
Fixes an issue during analysis that caused a cache miss in the image layer cache, reducing performance. Resolving this issue improves analysis performance.
Resolves an issue parsing environment variables with unexpected newline characters. This issue prevented services from starting.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
The webhook system managed in the configuration file is being deprecated in favor of the more advanced notification system which can be configured to send notifications to webhook endpoints. Please see Notifications for more information on configuring notifications.
UI Updates
Fixes
When a trailing slash was manually included in the URL for the Images tab, an issue was observed. This has been fixed.
Column headers within our tables now have a dividing line between them for better visibility and to make resizing easier.
When an error occurred while generating a report due to exceeding a configured limit, the message returned was generic and not helpful. Additional detail has now been added.
When a SAML user has groups conferred by an IdP, those groups are shown within the Edit User modal and appear to be removable. Because such a group persists even after removal (the IdP continues to assert it), the user experience has been improved to prevent removal and to explain why.
The graphs within the Artifact Analysis view now correctly repaint on changing the theme from dark to light mode or vice versa.
When navigating directly to a tab URL as a user who does not have permission to view it, the tab tour would still be triggered. This is no longer the case.
When the window height was made very small, the Log Out button would overlap with the navigation tabs. This has been fixed.
The dark/light mode preference is now preserved across browser tabs. This means that if you switch to dark mode in one browser tab, that change is immediately reflected in any other open browser tab (within the same browser).
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
If upgrading from a release in the range of v5.0.0 - v5.13.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Fixes
Fixes an issue during analysis that caused a cache miss in the image layer cache, reducing performance. Resolving this issue improves analysis performance.
Resolves an issue parsing environment variables with unexpected newline characters that would prevent services from starting.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
If upgrading from a release in the range of v5.0.0 - v5.13.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
System Configuration
The Anchore Enterprise API has new endpoints to view system configuration and dynamically change a few configuration values; a usage sketch follows the endpoint list below.
GET /v2/system/configurations
PATCH /v2/system/configurations
GET /v2/system/configurations/{config_key}
PUT /v2/system/configurations/{config_key}
DELETE /v2/system/configurations/{config_key}
Restores the configuration value to the default value.
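As a minimal illustration only, the following sketch shows how these configuration endpoints might be called with Python and the requests library. The base URL, credentials, HTTP basic authentication, the example configuration key, and the request body shape are all assumptions, not documented values.

```python
import requests

BASE_URL = "http://anchore-api.example.com:8228/v2"  # placeholder deployment URL
AUTH = ("admin", "example-password")                 # placeholder credentials (basic auth assumed)

# List the system configuration values exposed by the API.
resp = requests.get(f"{BASE_URL}/system/configurations", auth=AUTH)
resp.raise_for_status()
print(resp.json())

# Read, update, and then reset a single configuration key.
key = "example.config.key"  # hypothetical key name, not a documented one
print(requests.get(f"{BASE_URL}/system/configurations/{key}", auth=AUTH).json())
requests.put(f"{BASE_URL}/system/configurations/{key}", json={"value": "new-value"}, auth=AUTH)
requests.delete(f"{BASE_URL}/system/configurations/{key}", auth=AUTH)  # restores the default value
```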
The following system configuration values are now configurable via the API:
When using the max_days_since_fix within the VulnerabilityGate and PackageTrigger, the findings will now provide the following data:
fixed in - the version in which the fix was applied.
max_days_since_creation - the number of days since the finding was created.
vuln_detected - the date the vulnerability was detected.
fix_released - the date the fix was released.
max_days_since_fix - the number of days since the fix was applied per your policy trigger.
Reports
The following reports now include the field Artifact Vulnerable From which is the date when Anchore’s Reporting Service first detected the vulnerability on the artifact:
Runtime Inventory Images by Vulnerability
Tags by Vulnerability
Artifacts by Vulnerability
Logging
Structured log output now provides the service name and service version.
Memory Usage
If your deployment is configured to use the Object Store Database Driver, the memory usage profile of the Catalog Service will be reduced.
Fixes
The policy-engine service now gracefully handles errors when the catalog service can no longer access images referenced by ancestors.
Policy Gate packages with Trigger required_package now correctly allows the version match type to detect a minimum package version.
Policy Gate packages with Trigger required_package now correctly handles some java packages that do not have a proper version string. When the version comparison fails, the policy will now trigger a finding.
The Data-syncer Service now correctly removes older versions of GrypeDB from the Object Store.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
UI Updates
Improvements
Admins can now verify and manage system settings with our new Configuration view within the System tab. Editable configuration options are displayed by default and read-only items are searchable and accessible for viewing via a toggle. The options currently available for editing include global / service-level log levels and enabling the ClamAV Malware Scanner.
The following report templates now include the Artifact Vulnerable From field by default which is the date when Anchore’s Reporting Service first detected the vulnerability on the artifact:
Runtime Inventory Images by Vulnerability
Tags by Vulnerability
Artifacts by Vulnerability
Fixes
Within the Kubernetes tab, search text could sometimes lag behind what a user was typing as the table updated dynamically. Now, searching is seamless during updates and intermediate network requests are canceled.
Previously, when an admin wanted to update their LDAP configuration, the password field was required even if the password was not being updated. This is no longer the case.
Feed errors within the System > Health view are now handled gracefully and displayed within their section rather than obfuscating the entire page.
When a user logged into an account context containing special characters after a system restart, the user would be automatically redirected to their default account. This has been fixed.
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
If upgrading from a release in the range of v5.0.0 - v5.12.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Fixes
Fixes a potential deadlock that was seen when large deployments (32 services or more) booted up. This manifested as the services being unable to log messages and would not fully come to an active state.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
12.1.10 - Anchore Enterprise Release Notes - Version 5.13.0
Anchore Enterprise v5.13.0
Warning for Large Deployments
This release contains a potential deadlock seen when booting the services on initial install or after an upgrade. Customers with large deployments (32 or more services) should consider upgrading directly to v5.13.1 to avoid any possible issues.
If upgrading from a release in the range of v5.0.0 - v5.12.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
Malware Scanning is now available on images larger than 4 GB.
For images larger than 4 GB, Enterprise will split images into individual files of 2 GB or smaller.
Any files within the image that are greater than 2 GB will be skipped during analysis. Any skipped file will be identified with a Malware Signature as ANCHORE.FILE_SKIPPED.MAX_FILE_SIZE_EXCEEDED.
When performing Malware Scanning on these larger images, please expect an increase in your analysis time.
A new configuration option malware.clamav.max_scan_time has been added to the analyzer_config.yaml. This will allow for the configuration of the maximum time allowed for a single scan. The default value is 30 minutes.
The Malware Policy Gate with the Scan Findings Trigger will ignore the new ANCHORE.FILE.SKIPPED.SIZE_EXCEEDED findings as they do not represent positively identified malware. Instead, these findings can be identified using the Scan Not Run trigger by enabling the fire_on_skipped_files parameter.
Fixes
The data-syncer service now correctly frees memory and disk space after processing each dataset.
Addresses an issue where the Vulnerability Fix field’s value could change when a RHEL image containing perl was re-analyzed.
Fixes an error that occurs when an analyzer service fails to parse the clamav db metadata.
Corrects two issues with the configuration parsing performed at startup, which caused an error in the catalog or policy-engine service.
The first issue occurred when the root-level webhooks key was not present.
The second issue occurred when services.policy_engine.vulnerabilities.matching.exclude was not present.
Fixes an analysis race condition that could cause two analyzer services to attempt to analyze the same image at the same time. This would lead to the image analysis failing and would require a manual request for a force reanalysis.
Images that are imported from AnchoreCTL now correctly benefit from the complete list of supported package types.
Fixes a condition where a large number of system events could cause the notification service to fail to forward the notifications.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
UI Updates
Improvements
The status of Kubernetes inventory agents is now displayed within the System > Health view. This allows administrators to quickly confirm that all agents are reporting in as expected.
The Image Selection view now includes the ability to remove repositories without any images from the system.
Fixes
A regression was introduced in the previous release where the route was preserved upon logout. This has now been fixed.
The name field in the Add a New Registry Credential became required because of a code regression. It is now optional again.
Fixed a scenario in which a user without any pre-existing tour-state properties would not have them assigned on login.
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
12.1.11 - Anchore Enterprise Release Notes - Version 5.12.0
Anchore Enterprise v5.12.0
Warning for Large Deployments
This release introduced a potential deadlock seen when booting the services on initial install or after an upgrade. Customers with large deployments (32 or more services) should consider upgrading directly to v5.13.1 to avoid any possible issues.
If upgrading from a release in the range of v5.0.0 - v5.11.x
The upgrade will result in an automatic schema change that will require database downtime. Below are the estimated downtime durations for versions that require significant downtime:
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.x schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
The Exploit Prediction Scoring System (EPSS) is now included as an additional dataset from the Anchore Data Service. It is automatically
downloaded by the Data-Syncer Service.
This dataset is used in the Vulnerabilities Policy Gate and Package Trigger with optional parameters:
EPSS Score Comparison
EPSS Score
EPSS Percentile Comparison
EPSS Percentile
RBAC
A new RBAC role called image-delete has been added. This role allows users to delete images, sources, and archives from the system.
Removed additional authorization checks for adding the special annotation anchore.user/marked_base_image to an image.
API
New endpoint which returns the currently enabled resource-limits (if any) and the current usage of those limits. GET /v2/system/resource-limits
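A minimal sketch of querying this endpoint with Python and requests; the base URL and credentials are placeholders, and HTTP basic authentication is assumed.

```python
import requests

BASE_URL = "http://anchore-api.example.com:8228/v2"  # placeholder deployment URL
AUTH = ("admin", "example-password")                 # placeholder credentials

# Returns the currently enabled resource limits (if any) and their current usage.
resp = requests.get(f"{BASE_URL}/system/resource-limits", auth=AUTH)
resp.raise_for_status()
print(resp.json())
```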
Metric
New metrics have been added to provide more data around the database pool
anchore_db_pool_size - Max Number of connections in the pool
anchore_db_pool_available - Number of connections available for use in the pool
anchore_db_pool_in_use - Number of connections currently in use
SBOM
Enterprise will no longer surface packages with unknown versions. This will reduce the number of false positives seen during analysis.
Logging
When structured logging is enabled, the output on disk will include the JSON output as well as the normal text format, which is easier to read.
Fixes
Improves error handling during image analysis that could have caused unnecessary analysis failures.
Fixes the permission when deleting a source artifact from the system. Only users with system-admin, full-control, read-write, or image-delete roles can delete sources.
Improves handling of Alpine patch versions during vulnerability matching. For more information, please see the related issue.
Fixes an upgrade failure, seen during an upgrade to a v5.11.x release, when parent_digest is Null within the reports_images database table.
Fixes a policy eval failure that is seen when multiple evaluations on the same image are running concurrently.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
UI Updates
Improvements
The authenticated interface has been updated with a new vertical
navigation bar that offers quick access to various views within the
application. The navigation bar is collapsible and responsive,
enhancing the user experience by providing a streamlined interface.
Additionally, the open or collapsed state of the navigation bar is
now persisted across sessions. This new navigation bar lays the
groundwork for future global controls and usability enhancements.
The application now uses the full width of the screen, offering more
space for content. The font size and visual elements dynamically
adjust to the viewport size, ensuring a consistent user experience
across various screen widths and resolutions.
The image-delete role has been added to the RBAC system. This role allows
users to delete images, sources, and archives from the system and is now
provided amongst the other RBAC settings in the user and group management
controls under System.
The EPSS service is now available as a datasource for use by policy gates and
triggers in the Policy Manager. This service provides a score and
percentile for each vulnerability based on the likelihood of exploitation. The
EPSS score and percentile can be used as parameters in the Vulnerabilities
policy gate, and Package trigger. The availability and health of this
service is displayed alongside the other service details in the
System > Health view.
Fixes
The API Keys breadcrumb no longer includes the account name and now
displays only the username. Since API keys are not tied to a
specific account and user permissions may allow switching between
accounts, this change helps eliminate ambiguity.
The page displayed when a license has expired or is invalid now
contains links to the Anchore Support
page instead of an email address.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
12.1.12 - Anchore Enterprise Release Notes - Version 5.11.1
Anchore Enterprise v5.11.1
Note
Two customers experienced an upgrade failure to the v5.11.x release. The failure occurred when a parent_digest field was set to Null within the reports_images database table. This condition has been properly handled in the v5.12.0 database schema changes. Please consider upgrading directly to v5.12.0 to avoid any possible issues.
If upgrading from a release in the range of v5.0.0 - v5.10.0
The upgrade will result in an automatic schema change that will require database downtime.
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.1 schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Fixes
Addresses a communication failure between the Anchore Enterprise services seen only when your deployment is configured to use internal SSL.
internalServicesSSL.enable: true
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
Package Feeds and Policy Gates for Ruby Gems and NPMs are now EOL. Please contact Anchore Support for more information.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
Feed Service: The Feed Service has been deprecated and replaced by the Data Syncer service. The Feed Service is no longer supported in Enterprise installations.
Package Feeds: The Ruby Gems and NPMs package feeds and policy gates have been declared End of Life and are no longer supported.
12.1.13 - Anchore Enterprise Release Notes - Version 5.11.0
Anchore Enterprise v5.11.0
Note
Two customers experienced an upgrade failure to the v5.11.x release. The failure occurred when a parent_digest field was set to Null within the reports_images database table. This condition has been properly handled in the v5.12.0 database schema changes. Please consider upgrading directly to v5.12.0 to avoid any possible issues.
If upgrading from a release in the range of v5.0.0 - v5.10.0
The upgrade will result in an automatic schema change that will require database downtime.
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.11.0 schema change will take approximately 1-2 minutes to complete for every 1 million vulnerable artifacts in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
RBAC
New ability to assign administrative privileges to users who are not members of the admin account. This role may be granted either directly by another admin user or via a User Group membership.
RBAC Role name: system-admin
RBAC Domain Name: *
API
New endpoint GET /v2/accounts/users returns a list of all users in the system, including their roles and the accounts to which they belong. This is only available to admin users. A usage sketch follows this list of API changes.
New endpoint GET /v2/accounts/{account_name}/users-with-roles returns a list of users that have been granted roles in the specified account.
The following endpoints have improved data associated with Users and RBAC Roles. Each user object includes a list of roles that have been granted to the user and an indication of how the role has been granted.
GET /v2/user
GET /v2/accounts/users
GET /v2/accounts/{account_name}/users
GET /v2/accounts/{account_name}/users-with-roles
Improved the response time of endpoints that return a list of users.
Improved the response time of GET /v2/system/user-groups
The endpoint GET /v2/system/statistics now includes the following new metrics:
report_creation - The number of reports that have been created.
report_inventory - The number of generated reports currently in the system.
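For illustration, the sketch below calls the user-listing and statistics endpoints described above with Python and requests. The base URL, credentials, account name, and response field access are placeholders or assumptions, and HTTP basic authentication with an admin user is assumed.

```python
import requests

BASE_URL = "http://anchore-api.example.com:8228/v2"  # placeholder deployment URL
AUTH = ("admin", "example-password")                 # placeholder admin credentials

# All users in the system, with their roles and the accounts they belong to (admin only).
users = requests.get(f"{BASE_URL}/accounts/users", auth=AUTH).json()

# Users that have been granted roles in a specific account.
account = "devteam"  # placeholder account name
users_with_roles = requests.get(
    f"{BASE_URL}/accounts/{account}/users-with-roles", auth=AUTH
).json()

# System statistics, including the new report_creation and report_inventory metrics.
# The response structure assumed here (top-level keys) is illustrative only.
stats = requests.get(f"{BASE_URL}/system/statistics", auth=AUTH).json()
print(stats.get("report_creation"), stats.get("report_inventory"))
```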
Configuration
Added log messages which warn the user when an incorrect configuration value is detected.
Integration Health Status
When using the k8s-inventory agent release v1.7.0, the agent will automatically register itself with Anchore Enterprise. It will then send periodic health status updates so you can validate the health of your k8s-inventory agents directly from Enterprise.
The API has new endpoints to view the health status of the k8s-inventory agent, as shown in the sketch after this list.
GET /v2/integrations/k8s-inventory/health
GET /v2/integrations/k8s-inventory/health/{agent_id}
New AnchoreCTL commands are available to view integration health.
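A minimal sketch of checking agent health via the API endpoints above, using Python and requests; the base URL, credentials, and agent_id value are placeholders, and HTTP basic authentication is assumed.

```python
import requests

BASE_URL = "http://anchore-api.example.com:8228/v2"  # placeholder deployment URL
AUTH = ("admin", "example-password")                 # placeholder credentials

# Health status for all registered k8s-inventory agents.
all_agents = requests.get(f"{BASE_URL}/integrations/k8s-inventory/health", auth=AUTH).json()

# Health status for a single agent; the agent_id below is a placeholder.
agent_id = "00000000-0000-0000-0000-000000000000"
one_agent = requests.get(
    f"{BASE_URL}/integrations/k8s-inventory/health/{agent_id}", auth=AUTH
).json()
```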
Improves database space usage for the following reports by reorganizing the data into new tables:
Vulnerabilities by ECS Container
Vulnerabilities by Kubernetes Container
Vulnerabilities by Kubernetes Namespace
Once the upgrade is complete and you are comfortable with the resulting reports, you may wish to truncate the legacy tables and reduce database space usage.
Policy
Adds support for the value parameter when the check parameter is exists or not exists. Previously the value parameter was ignored for these check types.
SBOM Improvements
Utilizes a new JVM cataloger that improves the identification of Java installs performed outside of an OS package manager. It also normalizes version comparison logic for earlier Java versions that did not use semantic versioning, which should lead to more accurate vulnerability matching.
Adds vulnerability matching support for Azure Linux 3
Adds support for identifying OCaml packages
Adds binary classifiers for the following:
curl
dart
haskell
ghttp
proftpd
zstd
xz
gzip
jq
sqlcipher
Fixes
Fixes an issue where some java-archive artifacts had a blank Name or Version field within the Syft SBOM.
Fixes an issue where GET /v2/accounts/{account_name}/users/{username} endpoint failed to return all the user’s roles when some had been granted via a User Group membership.
Returns a more specific error code and response for GET /v2/images/{image_digest}/check when specifying an invalid policy_id.
The Policy Creation Metric now correctly increments when a policy is created via the API. This policy_creation metric can be seen in the GET /v2/system/statistics endpoint.
Minor fixes to the debug level logging within the API Service.
The Ancestry Policy Gate with allowed base image tags Trigger now allows wildcard matching for base image tags.
Fixes a missing event when a report in the pending state has been cancelled.
Improves error handling for GET /v2/images/{image_digest}/check when specifying base_digest=auto.
Fixes an issue with the Dockerfile Policy Gate where we failed to handle multi-line directives.
Using the POST /v2/policies API with an existing policy ID will now fail with a 409 response instead of incorrectly updating the existing policy. Please use PUT /v2/policies/{policy_id} to update policies.
Fixes an issue in the response code of POST /v2/vulnerability-scan.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
Package Feeds and Policy Gates for Ruby Gems and NPMs are now EOL. Please contact Anchore Support for more information.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
Feed Service: The Feed Service has been deprecated and replaced by the Data Syncer service. The Feed Service is no longer supported in Enterprise installations.
Package Feeds: The Ruby Gems and NPMs package feeds and policy gates have been declared End of Life and are no longer supported.
UI Updates
Improvements
In this release, administrators are identified by the presence of
the system-admin role. This role is automatically assigned to
users in the admin account, but users in other accounts can be
promoted to or demoted from an administrative role through this
assignment. The role can be directly assigned to a user during
account creation or indirectly through group membership. Note that
this role is read-only for users in the admin account.
Markdown markup is now supported in the Recommendation field of
a policy rule. This allows for more detailed explanations to be
provided to users when a policy rule is triggered.
Fixes
Multiple fixes have been applied to improve the appearance of the UI theme.
Because of a mishandled error condition, a non-admin user would be
logged out if they tried to access a global report, which can occur if
they click on an associated report link surfaced elsewhere in the
application. This has now been fixed.
In previous versions of the application, column widths in the
Artifact Analysis view would reset to their default values when
the page state changed due to background data updates. This issue
has now been resolved, and column widths will persist even when the
underlying data changes.
The card view is now the default for Feeds Sync details on the
System Health page. However, if a user has previously overridden
this setting, the table view will still be applied. Additionally,
dataset and checksum names are now displayed on the cards. Aesthetic
adjustments have been made to support these changes.
In previous versions of the application, selecting all visible
events while a filter was applied would inadvertently select all
events, not just the visible ones. This issue has now been resolved,
ensuring that only visible events are selected when a filter is
active. Additionally, an issue with string-based filtering, where the
filter failed to correctly match the user-entered string, has also been addressed.
To remain consistent with the outcome of changes made against
individual users, changes made to user groups will now trigger a
log out event for any users associated with any user groups that are
modified or deleted.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
12.1.14 - Anchore Enterprise Release Notes - Version 5.10.0
Anchore Enterprise v5.10.0
Note
The Feed Service has been replaced by a new Enterprise service called the “Data Syncer”. Enterprise no longer supports running a separate feed service.
If upgrading from a release in the range of v5.0.0 - v5.9.0
The upgrade will result in an automatic schema change that will require database downtime.
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.7.0 - v5.9.0 schema change will require minimal database downtime.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
Data Syncer Service: The Feed Service has been replaced by a new Enterprise service called the “Data Syncer”. Enterprise no longer supports running a separate feed service. The Data Syncer Service is responsible for syncing data from the Anchore Data Service to the Enterprise installation. The Data Syncer Service is a core service in the Enterprise installation and is required for the system to function correctly.
A new vulnerability exclusion mechanism has been added to the Policy Engine. This replaces the previous ability to disable specific providers in the on-prem feed service. See Data Syncer Configuration for more information on configuration.
Fixes
Resolves an issue that would prevent images that had no vulnerabilities detected in the past from reporting future vulnerabilities.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
Package Feeds and Policy Gates for Ruby Gems and NPMs are now EOL. Please contact Anchore Support for more information.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
Feed Service: The Feed Service has been deprecated and replaced by the Data Syncer service. The Feed Service is no longer supported in Enterprise installations.
Package Feeds: The Ruby Gems and NPMs package feeds and policy gates have been declared End of Life and are no longer supported.
UI Updates
Improvements
Data from Anchore Hosted Feeds is now synchronized to your local
enterprise installation via the Data Syncer service, and represented in the
system health view under System.
Fixes
With very large sets of groups and users, the time taken to store an
updated SSO IDP definition could be very long. This issue has now been
addressed.
Bulk selection of events when using a filtered list was including
items outside of the filter context. This issue has now been fixed. In addition, the table-filter control has been updated to permit compound filter strings corresponding to different table columns, and both the table- and advanced-filter
will now match whitespace in the Event Source table field.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
12.1.15 - Anchore Enterprise Release Notes - Version 5.9.0
Anchore Enterprise v5.9.0
Anchore Enterprise release v5.9.0 contains targeted fixes and improvements.
Attention Feed Service
In the future v5.10.0 release, the Feed Service will be obsolete and replaced by a new Enterprise service that will import feed data directly from the new hosted Anchore Data Service. The v5.10.0 release will also provide enhanced support for air-gapped deployments. The goal of this change is to reduce operational burden for our end users and allow for faster response to changes in upstream data providers. More information about this migration will be provided leading up to the release of v5.10.0.
If upgrading from a release in the range of v5.0.0 - v5.8.1
The upgrade will result in an automatic schema change that will require database downtime.
The v5.3.0 schema change may take more than an hour to complete depending on the amount of data in your reporting system.
The v5.6.0 schema change may take 2 hours or more depending on the amount of data in your system.
The v5.7.0 - v5.8.1 schema change will require minimal database downtime.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
Package Types
Enterprise has increased the number of supported package types to be aligned with what is currently supported by Syft. Below is a list of the newly available package types:
ArchLinux alpm (under os)
CocoaPods
Conan
Dart Pub
Erlang/OTP
Gentoo Portage (under os)
GitHub Action Workflows
GitHub Actions
Hackage
Hex (Erlang)
Linux Kernel
Linux Kernel Module
LuaRocks
NixOS packages (under os)
PHP Composer
PHP PECL
R Package
Rust Crate
SWI-Prolog
Swift
WordPress Plugins
Policy
The Default Policy, which is automatically available in newly created accounts, has been renamed Anchore Enterprise - Secure - Default Policy. It has also received some updates to its rule sets.
The CIS Policy is no longer automatically available during new accounts creation.
The anchore_security_only Policy is no longer automatically available during new accounts creation.
The ancestry gate now supports denylisting ancestor images by tag or digest.
API
The POST /v2/repositories endpoint now includes a query parameter exclude_existing_tags which, when set, will exclude tags that are already present in the repository. Only newly created tags will be added to the Enterprise system. A usage sketch follows the API list below.
The GET /v2/system/statistics API endpoint now includes the following:
account_creation
account_inventory
user_creation
user_inventory
report_execution_inventory
image_inventory
source_inventory
GET /v2/summaries/image-tags endpoint now includes an optional flag runtime which when set to true will return only tags that are found in the runtime inventory.
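As a rough sketch only, the calls below show how the exclude_existing_tags and runtime parameters described above might be used from Python with requests. The base URL, credentials, and the repository query parameter name are assumptions; only exclude_existing_tags and runtime are taken from the notes above.

```python
import requests

BASE_URL = "http://anchore-api.example.com:8228/v2"  # placeholder deployment URL
AUTH = ("admin", "example-password")                 # placeholder credentials

# Add a repository but skip tags that already exist in it; only newly created
# tags will be added to Enterprise. How the repository itself is identified
# (here a "repository" query parameter) is an assumption for illustration.
requests.post(
    f"{BASE_URL}/repositories",
    params={"repository": "docker.io/library/alpine", "exclude_existing_tags": True},
    auth=AUTH,
)

# Summarize only image tags that are found in the runtime inventory.
runtime_tags = requests.get(
    f"{BASE_URL}/summaries/image-tags", params={"runtime": "true"}, auth=AUTH
).json()
```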
Report GraphQL
Support was added to cancel a report execution that is currently running or queued.
Fixes
The SPDX format will now have the correct originator field for Java jar packages.
Addresses an issue where Native Users with active UI sessions continued to be able to access reports after Native Users were disabled.
Improves the error handling when listing policies that have a missing or invalid policy digest.
Fixes debug logging in the authorization path within the API Service.
Fixes an issue where we failed to fetch vulnerabilities for an Alpine image due to improper constraints.
The metadata trigger in the packages gate will now default to an equality (’=’) comparison for the package type, name and version fields. The comparison can be controlled by specifying the type_comparison, name_comparison or version_comparison parameters.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
Package Feeds and Policy Gates for Ruby Gems and NPMs are now deprecated. Please contact Anchore Support for more information.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
The Feed Service is deprecated in v5.8.0. Starting in v5.10.0 a new service will be introduced to synchronize Feed data from a hosted Anchore Data Service.
UI Updates
Improvements
The SBOM tab within the Image and Source Analysis views now supports many more package types such as Conan, Swift, etc.
A new Usage tab for administrators has been added to the System view which displays metrics related to SBOMs analyzed, total number of accounts, and the total number of users in the system. This tab is meant to provide insights into your installation and the value Anchore delivers. Additional detail is available for download as a JSON file.
The Analyze Repository dialog in the Image Selection view now has an option to exclude existing tags from analysis. This is ideal for scanning very large repositories without pulling in unnecessary history.
The Analyze Tag dialog now allows a Dockerfile to be uploaded when you submit a tag or image digest for analysis. The Dockerfile can then be used for policy gates which rely on it rather than the ‘guessed’ one.
The Incomplete Analyses modal within the Image Selection view has been further optimized to improve performance via server-side pagination, filtering, and sorting.
Within the Reports tab, users can now manually stop generating a report that is pending or currently running. For large-scale systems, this can be useful to prevent a report from consuming significant resources.
Within the Reports tab, the Account column is currently included by default for most of our system templates. This field is necessary when viewing global reports (results scoped to multiple accounts). When a new, global report is based on a template that does not include the Account column, the column is now automatically added during the report preview. Similarly, if the local scope is configured instead, the Account column is automatically removed during report preview. The column can still be manually added or removed prior to report creation.
Fixes
Users with the createRepository permission can now analyze a repository even if one or more tags have already been analyzed. Previously, a conflict would occur if the underlying repo_update subscription existed, regardless of whether it was active or not.
Previously, report filter values were not trimmed of whitespace prior to previewing a report. This issue is now fixed.
When sorting a report by a column that contains null values, the sorting order was incorrectly handled. This issue has now been addressed.
When deleting event(s) from the Events view, the confirmation modal buttons have had their language updated to be more descriptive. Instead of ‘Yes’ or ‘No’, the buttons now read ‘Delete’ and ‘Cancel’.
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs.
12.1.16 - Anchore Enterprise Release Notes - Version 5.8.1
Anchore Enterprise v5.8.1
Anchore Enterprise release v5.8.1 contains targeted fixes and improvements.
Attention Feed Service
In the future v5.10.0 release, the Feed Service will be obsolete and replaced by a new Enterprise service that will import feed data directly from Anchore every six (6) hours. The future v5.9.0 release will be the last to use the Feed Service on-premises. The v5.10.0 release will also provide enhanced support for air-gapped deployments. The goal of this change is to reduce operational burden for our end users and allow for faster response to changes in upstream data providers. More information about this migration will be provided leading up to the release of v5.10.0.
If upgrading from a release in the range of v5.0.0 - v5.3.0
The upgrade will result in an automatic schema change that will require database downtime. We are anticipating that this schema change may take more than an hour to complete depending on the amount of data in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
If upgrading from a release in the range of v5.4.x - v5.6.0
The upgrade will result in an automatic schema change that will require database downtime. We expect that this could take up to 2 hours depending on the amount of data in your system.
If upgrading from the v5.7.0 release
The upgrade will result in an automatic schema change that will require minimal database downtime.
If upgrading from the v5.8.0 release, no additional action is needed.
Fixes
Resolves an issue in the kev list policy trigger added in v5.8.0 that prevented it from triggering on vulnerabilities matched from some data sources.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
Package Feeds and Policy Gates for Ruby Gems and NPMs are now deprecated. Please contact Anchore Support for more information.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
The Feed Service is deprecated in v5.8.0. Starting in v5.10.0 a new service will be introduced to synchronize Feed data from Anchore.
12.1.17 - Anchore Enterprise Release Notes - Version 5.8.0
Anchore Enterprise v5.8.0
Anchore Enterprise release v5.8.0 contains targeted fixes and improvements.
Attention Feed Service
In the future v5.10.0 release, the Feed Service will be obsolete and replaced by a new Enterprise service that will import feed data directly from Anchore every six (6) hours. The future v5.9.0 release will be the last to use the Feed Service on-premises. The v5.10.0 release will also provide enhanced support for air-gapped deployments. The goal of this change is to reduce operational burden for our end users and allow for faster response to changes in upstream data providers. More information about this migration will be provided leading up to the release of v5.10.0.
If upgrading from a release in the range of v5.0.0 - v5.3.0
The upgrade will result in an automatic schema change that will require database downtime. We are anticipating that this schema change may take more than an hour to complete depending on the amount of data in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
If upgrading from a release in the range of v5.4.x - v5.6.0
The upgrade will result in an automatic schema change that will require database downtime. We expect that this could take up to 2 hours depending on the amount of data in your system.
If upgrading from the v5.7.0 release
The upgrade will result in an automatic schema change that will require minimal database downtime.
Improvements
KEV (Known Exploited Vulnerabilities) Support
The KEV list is now available to be ingested as a Vulnerability Annotation feed within the Feed Service. The KEV list feed will be enabled by default within the helm chart. See Feeds for more info.
A new KEV List Trigger is now available as part of the Vulnerability Policy Gate. See Policy Checks for more info.
This replaces the CISA KEV Vulnerabilities Policy Pack, which can be removed after validating the behavior of this new trigger.
Improves the obfuscation of user credentials in the logs.
Allowlist entries can now include a specific package version. This can be accomplished by adding both the Package Name and Version in the “Package” field within the allowlist UI editor.
Improved the authentication path performance when using the User Group feature at scale.
Fixes
Improves error logs found in the report-worker service to include better information when an error occurs.
Fixes an issue where a success status was returned when deleting an image without the force flag even though the image was not allowed to be deleted. This can occur when it is the latest image of the tag or when it has active subscriptions.
Fixes an issue where a repository watch subscription can be created or activated without having the proper RBAC permissions.
Removes obsolete report-worker task data from the database. This has no effect on the running system; the cleanup takes place during the database schema migration and is a small cleanup of old data.
Account Deletion
Ensure that the system will properly clean up an account and its associated data when the account name contains special characters.
Ensure that the system will properly delete any RBAC Principals associated with the account.
If the Disallow Native User feature is enabled, the system will now properly prevent access to GraphQL endpoints and System endpoints by native users.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
Package Feeds and Policy Gates for Ruby Gems and NPMs are now deprecated. Please contact Anchore Support for more information.
The enterprise-gitlab-scan plugin is being deprecated in favor of using AnchoreCTL directly in your pipelines. Please see GitLab for more information on integrating Anchore Enterprise with GitLab.
The Feed Service is deprecated in v5.8.0. Starting in v5.10.0 a new service will be introduced to synchronize Feed data from Anchore.
UI Updates
Improvements
The Kubernetes view has been refactored with an improved data
retrieval strategy to allow the component to work at a larger scale.
Summary information is now fetched independently of the main
dataset, and data fetches for the cluster and namespace tiers are
now compartmentalized. Additional improvements have been made to the
filtering and data composition operations to enhance performance and
reduce time to availability. Please note that the reports service
must be enabled to use this view.
Fixes
The error component used to display inline errors would overflow if
the error information was too voluminous, sometimes exceeding the
height of the viewport. The control is now constrained to a maximum
height and is scrollable.
Several issues related to context-based routing, introduced in the
previous release, were discovered. These issues primarily affected
legacy routes that did not contain an account entry upon logging in.
Additionally, a fix has been provided for manually changing the
context in the URL for routes with URI encoded entries (such as
Artifact Analysis). Previously, these routes would lose encoding
on reload, resulting in a 404 error. These and other routing
issues have now been addressed.
Adding an LDAP URI without the ldap:// or ldaps:// protocol
would crash the app when testing the configuration or logging in
using LDAP. Guards against this error are now in place, and the
protocol prefix is now mandatory.
Changing permissions could sporadically cause the app to crash due
to an error in the event broadcast triggered by this action. This
issue has been resolved.
Under certain circumstances, an error response from the SSO provider
during authentication would crash the app. Error handling has been
updated to gracefully manage errors and provide detailed information
to the user.
In deployments where SSO is the sole authentication scheme, the
LDAP authentication option was still present on the login page.
This is no longer the case.
When an error occurred during the operation of submitting a
repository for analysis, the toast message describing the problem
was not raised. This issue has been addressed.
Due to a missing role-based access control permission, users without
the createRepository permission could still interact with the
Watch Repository toggle. This issue is now fixed.
Previously, it was not possible to add more than one annotation from
the Metadata tab in the Artifact Analysis view.
Additionally, adding a single annotation would result in an
erroneous redirect. Both issues have been addressed.
Non-Chrome users who had not previously set their view theme would
find the app defaulting to dark mode after invoking the print view
control (present in the Policy Compliance and
Vulnerabilities tabs). This issue has been resolved.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
12.1.18 - Anchore Enterprise Release Notes - Version 5.7.0
Anchore Enterprise v5.7.0
Anchore Enterprise release v5.7.0 contains targeted fixes and improvements.
Attention
The v5.5.0 release changed the defaults for the feed provider’s configuration. The new defaults will import results published by Anchore every six (6) hours. This reduces the need to configure multiple sources, provides NVD data enriched by Anchore, and makes GitHub Security Advisories available to customers that have firewall constraints. Please ensure that you have access to https://enterprise.vunnel.feed.anchore.io for uninterrupted feeds service.
If upgrading from a release in the range of v5.0.0 - v5.3.0
The upgrade will result in an automatic schema change that will require database downtime. We are anticipating that this schema change may take more than an hour to complete depending on the amount of data in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
If upgrading from a release in the range of v5.4.x - v5.6.x
The upgrade will result in an automatic schema change that will require database downtime. We expect that this could take up to 2 hours depending on the amount of data in your system.
Improvements
Adds the ability for users to override the base image used throughout the system. This is accomplished by adding the annotation anchore.user/marked_base_image to an image.
API endpoints /v2/images/{image_digest}/check and /v2/images/{image_digest}/vuln/{vuln_type} now take auto as a value for base_digest parameter. This will allow the system to determine which ancestor will be used as the Base Image.
This feature is enabled by default in v5.7.0. To disable this feature, set services.policy_engine.enable_user_base_image to false in the values.yaml file.
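A minimal sketch of using base_digest=auto with these endpoints from Python and requests; the base URL, credentials, image digest, and the all vulnerability type are placeholders or assumptions, and HTTP basic authentication is assumed.

```python
import requests

BASE_URL = "http://anchore-api.example.com:8228/v2"  # placeholder deployment URL
AUTH = ("admin", "example-password")                 # placeholder credentials
digest = "sha256:<image-digest>"                     # placeholder image digest

# Let Enterprise determine which ancestor to use as the base image for the evaluation.
check = requests.get(
    f"{BASE_URL}/images/{digest}/check", params={"base_digest": "auto"}, auth=AUTH
).json()

# The same option applies to the vulnerability listing endpoint; "all" is used
# here as an example vuln_type value and is an assumption.
vulns = requests.get(
    f"{BASE_URL}/images/{digest}/vuln/all", params={"base_digest": "auto"}, auth=AUTH
).json()
```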
API access for users configured for native access can now be disabled by setting anchoreConfig.user_authentication.disallow_native_users to true in the values.yaml file.
Adds info level log messages to runtime inventory post handlers.
Improves report Vuln ID Filter description to include CVEs.
Removes the image_cpes database table that is no longer used and can consume a large amount of database space.
Improves validation of object_store and analysis_archive settings during startup.
Response object GET /v2/rbac-manager/my-roles now includes more detail about the account for each role.
Admin users can now create an API Key that can be used to manage Accounts, User Groups and RBAC Roles.
Reduced the size of the Enterprise Image.
Fixes
The Fix Observed At value on vulnerabilities from all ecosystems now displays correctly.
Deployments using db as their object store driver will now be able to store large objects over 1GB in size. This means very large SBOMs will now successfully store.
Addresses an issue where account deletion didn’t fully clean up database artifacts stored for the account, such as some reporting data.
The CycloneDX SBOM now contains the bom-ref field as part of the output.
Allow users with read-only or read-write RBAC Authorization to have the following permissions:
getECSContainers
getECSServices
getECSTasks
getKubernetesClusters
getKubernetesVulnerabilities
listRuntimeInventories
getKubernetesNamespaces
getKubernetesContainers
getKubernetesNodes
getKubernetesPods
Fixes an issue in the policy_creation counter found in the GET /v2/system/statistics endpoint.
Explicit SAML Users are now allowed to use the : character in usernames.
Account names are now prevented from being created with the # character.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
Package Feeds and Policy Gates for Ruby Gems and NPMs are now deprecated. Please contact Anchore Support for more information.
UI Updates
Improvements
The login page has been updated with a new design that uses tabs
to switch between configured authentication methods. When multiple
authentication methods are available, tabs are shown for each available
method. The user’s last-selected method is remembered and shown as the
default tab on subsequent visits.
Anchore Enterprise now supports a Single Sign-On (SSO) only mode.
This mode allows administrators to disable the local authentication
mechanism, which removes the default login form. This is an opt-in
feature enabled by setting the sso_auth_only configuration option
to True.
The Analyze a Tag control has been updated to allow users to
provide a SHA256 digest for the image they wish to analyze. This
feature is useful when you only want to analyze a specific image.
In addition, you can now populate the Registry, Repository,
and Tag fields by pasting a pull string (e.g.,
docker pull docker.io/library/alpine:latest) in the inline control
provided.
The reported base image in the Artifact Analysis view now
reflects changes made within our platform services, whereby the
system can either make the determination automatically or have the
base image specified by an anchore.user/marked_base_image
annotation associated with an image in the ancestry.
Fixes
Previously, the selected default entry in the table page size
dropdown was not being set correctly when opened, and was defaulting to
the first entry. This has now been addressed.
Our application security policies have been updated to prevent
client-side caching, the execution of arbitrary code within our
dependent packages using eval(), and the HTTP Strict Transport
Security (HSTS) header has been added to enforce the use of HTTPS
connections and to remove the ability for users to click through
warnings about invalid certificates.
Within Artifact Analysis, when the route for this view (and the associated
compliance data request) contained the fat manifest digest, the image_digest
returned would still be the platform-specific digest. This caused an
equality check with the route to fail. This has now been fixed.
The Vulnerability ID filter description has been updated to
clarify that it filters the Vulnerability and CVE fields.
The Delete Events modal within the Events tab was successfully
deleting events in batches, but the progress bar was not visually
updating to indicate this. This has now been fixed.
The calculation in the Dashboard view that describes how many
vulnerabilities were affecting how many repositories was inaccurate because the
summarization included duplicate entries. This was a consequence of
different vulnerabilities against the same repository advancing the
repository count. This has now been corrected.
An issue with the policy allowlist data payload was preventing
updates (such as removals) from taking place against allowlists displayed by the
associated dialog in the Artifact Analysis view. Now fixed.
The donut chart displayed in the printable version of the Policy
Compliance tab in the Artifact Analysis view was not positioned correctly.
This has now been fixed.
Boolean values for annotations are now displayed correctly.
The Twitter social media logo has been updated to 𝕏 to reflect the change in
brand and name.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
12.1.19 - Anchore Enterprise Release Notes - Version 5.6.0
Anchore Enterprise v5.6.0
Anchore Enterprise release v5.6.0 contains targeted fixes and improvements.
Attention
The v5.5.0 release changed the defaults for the feed provider’s configuration. The new defaults import results published by Anchore every six (6) hours. This reduces the configuration needed for multiple sources, provides NVD data enriched by Anchore, and makes GitHub Security Advisories available to customers with firewall constraints. Please ensure that you have access to https://enterprise.vunnel.feed.anchore.io for uninterrupted feeds service.
If upgrading from a release in the range of v5.0.0 - v5.3.0
The upgrade will result in an automatic schema change that will require database downtime. We anticipate that this schema change may take more than an hour to complete, depending on the amount of data in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
If upgrading from a release in the range of v5.4.x - v5.5.x
The upgrade will result in an automatic schema change that will require database downtime. We expect that this will take between 2 and 15 minutes depending on the amount of data in your system.
Improvements
/v2/system/statistics API endpoint now includes creation and current counts of runtime inventory and associated metadata.
/v2/system/feeds and /v2/system/feeds/{feed} API endpoints now include the last updated time for the feed groups.
Artifact Lifecycle Policies now include a new policy condition to preserve base images.
Deployment history now includes the initial deployment information.
/v2/images and /v2/summaries/image-tags API endpoints now include an optional flag analyzed_since to help reduce the amount of data returned.
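As a rough sketch of the analyzed_since flag mentioned above (the URL, credentials, and timestamp format are assumptions to adapt to your deployment):

```python
import requests

ANCHORE_URL = "https://anchore.example.com:8228"  # placeholder API endpoint
AUTH = ("admin", "<password>")                    # placeholder credentials

# Restrict the listing to images analyzed after the given time to reduce the
# amount of data returned (timestamp format is an assumption; check the API reference).
resp = requests.get(
    f"{ANCHORE_URL}/v2/images",
    params={"analyzed_since": "2024-06-01T00:00:00Z"},
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())
```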
Fixes
Ensures that the layer cache is cleared periodically.
Ensures image imports are not removed until they have been completely processed.
Fixes inconsistent return values when specifying registry data to the POST /v2/registries and /v2/registries/{registry_name} endpoints.
Improves the validation of data posted to the /v2/ecs-inventory endpoint.
Improves the validation around the object store compression setting. Appropriate error messages are now available in the log during startup.
Resolves an issue with Policy evaluation and in Reports where inherited_from_base information for vulnerabilities was calculated against the image with the fewest layers in common instead of the most.
Fixes an issue caused by expired image imports that resulted in logs being flooded with validation errors.
New tags and policy evaluations for existing images are now promptly loaded into the reporting system.
Fixes the Ubuntu 24.04 mapping within Enterprise to be noble, per the security announcement. It was previously mapped incorrectly to numbat.
The Stale Feed Policy Gate trigger now uses the last updated time per feed group.
Fixes a deadlock seen by the report-worker service while updating the runtime inventory data in the reporting system.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
In the upcoming v5.7.0 release, support for the Ruby Gems and NPM package feeds will be deprecated. Please contact Anchore Support for more information.
UI Updates
Improvements
By default, the account context is now included within the URL when
navigating throughout the application. This change allows users to
bookmark or share links that will open the application in the same
account context as the original link, as long as they have sufficient
permissions to access the resource.
The Image Selection view has been further optimized to improve
performance when loading the different data tiers (registry,
repository, and tags) via server-side pagination, filtering, and
sorting. This optimization should reduce the time taken to present
the information in each of these tables.
The Dashboard view has been optimized to improve the Time to
Interactive (TTI) on load. Calculation of Dashboard metrics can take
a significant amount of time, so we now allow the pending metrics
data to continue loading without blocking the UI. Since the Dashboard
view is typically the default on login, this change allows users to
navigate elsewhere if desired.
When creating or editing an image retention policy within the Data
Management view, the option to exclude base images from removal
is now available.
When creating a SAML Provider Configuration, the system role account-viewer
is no longer available to be set as the default role.
Fixes
The View Incomplete Analyses modal within the Images tab had the
ability to toggle between listing pending, analyzing, and failed
images across your account or for the registry, repository, or tag
you were viewing. This was removed in a previous release, but has now
been reinstated.
When certain modals were open and a forced logout was triggered due
to a permission change or session expiration, the modal dimmer would
remain. This has now been fixed.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
The v5.5.1 release has changed the defaults for the feed provider’s configuration. The new defaults import results published by Anchore every six (6) hours. This reduces the configuration needed for multiple sources, provides NVD data enriched by Anchore, and makes GitHub Security Advisories available to customers with firewall constraints. Please ensure that you have access to https://enterprise.vunnel.feed.anchore.io for uninterrupted feeds service.
If upgrading from a release in the range of v5.0.0 - v5.3.0
The upgrade will result in an automatic schema change that will require database downtime. We anticipate that this schema change may take more than an hour to complete, depending on the amount of data in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
If upgrading from the v5.4.x release, no additional action is needed.
Improvements
A small change to improve the responsiveness of the Runtime Images by Vulnerability report. This change will reduce the time it takes to generate the report under certain conditions.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
12.1.21 - Anchore Enterprise Release Notes - Version 5.5.0
Anchore Enterprise v5.5.0
Anchore Enterprise release v5.5.0 contains targeted fixes and improvements.
Attention
The v5.5.0 release has changed the defaults for the feed provider’s configuration. The new defaults import results published by Anchore every six (6) hours. This reduces the configuration needed for multiple sources, provides NVD data enriched by Anchore, and makes GitHub Security Advisories available to customers with firewall constraints. Please ensure that you have access to https://enterprise.vunnel.feed.anchore.io for uninterrupted feeds service.
If upgrading from a release in the range of v5.0.0 - v5.3.0
The upgrade will result in an automatic schema change that will require database downtime. We anticipate that this schema change may take more than an hour to complete, depending on the amount of data in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
If upgrading from the v5.4.x release, no additional action is needed.
Improvements
Feeds Service
Defaults to using the Anchore-produced workspaces for each vulnerability feed provider. These workspaces are updated every six (6) hours. Please see Feeds for more detailed information.
Ubuntu 24.04 Feed Provider is now supported.
Reports
Reduced the number of links to upstream sources of vulnerabilities within the reports by adding a new field in the reports. This should be seamless to users of the UI reporting service.
Authentication
Improved the error message returned when a requested API Key expiry date exceeds the configured setting.
Removes the restriction that prevents creation of SSO users explicitly in Anchore when sso_require_existing_users is not set. SSO users may now be created manually and associated with an IDP by user admins regardless of the configuration of the IDP integration. This is only available directly via the API.
API
Provides a new endpoint, GET /system/anchorectl, to download the compatible version of AnchoreCTL directly from the product (see the sketch after this API list).
Provides a new stateless_sbom_analysis value in the response of GET /system/statistics.
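A hedged sketch of downloading the compatible AnchoreCTL build from the new endpoint. The base URL and credentials are placeholders, the /v2 prefix is assumed, and the assumption that the response body is the binary itself should be verified against the API reference.

```python
import requests

ANCHORE_URL = "https://anchore.example.com:8228"  # placeholder API endpoint
AUTH = ("admin", "<password>")                    # placeholder credentials

# Fetch the AnchoreCTL build that matches this Enterprise deployment and save it locally.
with requests.get(f"{ANCHORE_URL}/v2/system/anchorectl", auth=AUTH, stream=True) as resp:
    resp.raise_for_status()
    with open("anchorectl", "wb") as out:
        for chunk in resp.iter_content(chunk_size=8192):
            out.write(chunk)
```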
AUDIT Logs
The Helm chart now provides the ability to disable the AUDIT logging that was introduced in v5.4.0. The default is enabled.
Removes false-positive vulnerability matches on the kernel headers packages for RHEL and Debian when the match is on the full kernel and the kernel is not present in the image.
Better handle overlapping vulnerability scans for the same image.
Better detection of vulnerabilities for Calico images.
Improved error messages for misconfigured S3 buckets during service startup.
Fixed the filter of Vendor Only when used by the Vulnerabilities Policy Gate and Package Trigger.
Better handle Runtime Inventory that contains missing IDs.
The Vulnerabilities by ECS Container, Vulnerabilities by Kubernetes Container, and Vulnerabilities by Kubernetes Namespace reports no longer produce results that are not part of the current inventory tracked by the Catalog Service. This behavior now matches the other provided reports.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
UI Updates
Improvements
The Image Selection view has been optimized to improve
performance when loading the different data tiers (registry,
repository, and tags). This optimization should reduce the time
taken to present the information in each of these tables.
The report templates that contain links to external references
now use the Image Link field by default, replacing the
(deprecated) Links field. This prevents duplication of results
where the only differences between row entries were the links
themselves.
Fixes
Operations against the services utilized by the Inventory view
are now correctly logged in the system logs.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
If upgrading from a release in the range of v5.0.0 - v5.3.0
The upgrade will result in an automatic schema change that will require database downtime. We anticipate that this schema change may take more than an hour to complete, depending on the amount of data in your reporting system.
If your Anchore Enterprise deployment is on FIPS enabled hosts and your database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
If upgrading from the v5.4.0 release, no additional action is needed.
Improvements
Enables delivery of Anchore-augmented vulnerability records for a better vulnerability scanning experience. This allows Anchore to minimize customer impact from the current NVD analysis slowdown and ensure accurate scan results.
To provide the best experience, three configuration options are available:
NVD Direct Mode - No changes are needed. You will continue to receive the vulnerability data from NVD as you do today.
NVD Direct Mode with Anchore Enrichment - Allows Anchore to enrich NVD entries by adding CPE string(s), which enables Anchore Enterprise to correctly match new vulnerabilities. Requires access to GitHub.
NVD Proxy Mode with Anchore Enrichment - In this mode, Anchore produces the resulting workspace of the Anchore enrichments and publishes it at https://enterprise.vunnel.feed.anchore.io. This allows users to consume the Anchore-enriched NVD data without needing access to GitHub.
For more configuration details please review NVD Provider.
NVD with Anchore Enriched data does not currently provide any severity information. By definition, only NVD can supply NVD CVSS scores.
Note
The future v5.5.0 release will change the default for the feed provider’s configuration. The new default will import results published by Anchore every 6 hours. This will reduce the configuration needed for multiple sources, provide NVD data enriched by Anchore, and make GitHub Security Advisories available to customers with firewall constraints.
Fixes
Resolves issue with uploading runtime inventory that contains unicode characters.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
UI Updates
Fixes
A fix has been applied to the image summary data processing operation
that calculates the artifact taxonomy for registries, repositories,
and tags. Ports are now correctly handled when included in the registry
value.
If your Anchore Enterprise deployment is on FIPS enabled hosts and the database is being hosted on Amazon RDS, an upgrade to Postgres 16 or greater is required. For more information please see the FIPS section in Requirements.
Improvements
AUDIT event logs have been added to the API Service for the following endpoints
We have simplified the Anchore Enterprise deployment by removing the need to deploy the RBAC Authorizer Service and RBAC Manager Service. RBAC functionality within the product is unchanged.
Reports
Reports which contain vulnerability information have a new column for CVEs.
The CVEs may differ from the Vulnerability ID used for matching when that ID is an Advisory ID.
The CVEs column may contain N/A if a CVE has not yet been published or detected for the Advisory ID.
Current saved reports remain unchanged. To see this new column, you will need to generate a new saved report.
The Vulnerability ID Filter has been updated to work on both the Vulnerability ID and the CVEs.
API
/system/deployment-history is a new endpoint that returns the upgrade history of your deployment (see the sketch below).
/system/statistics endpoint now includes the total number of policy creations, the current number of policies in the deployment, and the total number of policy evaluations.
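A minimal sketch of querying both endpoints; the base URL, credentials, and /v2 prefix are assumptions about a typical deployment.

```python
import requests

ANCHORE_URL = "https://anchore.example.com:8228"  # placeholder API endpoint
AUTH = ("admin", "<password>")                    # placeholder credentials

# History of upgrades applied to this deployment.
history = requests.get(f"{ANCHORE_URL}/v2/system/deployment-history", auth=AUTH)
history.raise_for_status()
print(history.json())

# System statistics, including policy creation, current policy, and evaluation counts.
stats = requests.get(f"{ANCHORE_URL}/v2/system/statistics", auth=AUTH)
stats.raise_for_status()
print(stats.json())
```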
Fixes
Policy delete now properly removes document store artifacts from this policy.
Improves the account creation errors returned to the user when the failure is regarding policy creation.
Deletion of an image will no longer cause other images to return 500 errors. This could occur when the two images shared the same image ID.
Fixes the Policy Gate: Tag Drift Trigger failure that was seen when multiple versions of the tag existed and the comparison was against the newest one.
Improves the archive rule deletion errors returned to the user when they did not have permissions for the operation.
Returns the image content even when the parent digest is used for the request. This was previously seen as an error in anchorectl image content.
Improves errors from POST /rbac-manager/roles/{role_name}/members
when the user is an admin user
when the username is not valid or is a reserved system name
Improves errors from POST /system/user-groups/{group_uuid}/users
when the user is an admin user
when the username is not valid or is a reserved system name
Improves errors from POST /system/user-groups
when the user group name is a reserved system name
when the user group name overlaps with a username
Fixes the response of PATCH /system/user-groups/{group_uuid} to return the entire user group value.
Fixes a 500 error in the Action Workbench when selecting a notification endpoint.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
UI Updates
Improvements
The User Groups view provides a summary of all user groups and
the accounts associated with each group. From this view,
administrators can create, edit, and delete user groups, and define the accounts and associated permissions accessible to
users within each group. Native users, LDAP users, and SSO users can
all be assigned to user groups from their respective Add or
Edit dialogs.
A performance improvement has been applied to the image summary data
processing operation that calculates the artifact taxonomy for
registries, repositories, and tags. This improvement should reduce
the time taken to present the Image selection view.
Fixes
A default of N/A has now been provided for empty entries in the
CVEs column of the Vulnerabilities tab. This change ensures
that the CVEs column is always populated with data, even if the
vulnerability has no associated CVEs.
During template creation, we identified an issue where the state of
unchanged boolean filters marked as False was incorrectly
recorded as null after being saved. This error caused the filter
to be omitted from any report queries generated from that template.
While the issue was resolved in the 5.3.2 release for new
templates, pre-existing templates remained unchanged. An AppDB
migration has been added to automatically correct this issue for
existing templates.
The Last Seen popup contained broken links to the Inventory
page for ECS containers. Images of this type are not currently
supported in the Inventory view, and the links have now been
removed.
Reports downloaded from the Reports view that contained multiple
CVE entries would not display correctly in the CSV format on account
of the data itself being comma-separated. This issue has now been
addressed.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
12.1.24 - Anchore Enterprise Release Notes - Version 5.3.0
Anchore Enterprise v5.3.0
Anchore Enterprise release v5.3.0 contains targeted fixes and improvements.
Enterprise Service Updates
Announcements
Note
In the future v5.4.0 release, any Anchore Enterprise deployment on FIPS-enabled hosts with its database hosted on Amazon RDS will be required to run Postgres 16 or greater. For more information, please contact Customer Support.
Requirements
If upgrading from a v5.x release, a database update is required.
Provides the ability for an administrator to define one or more RBAC Roles for one or more accounts within a User Group. The administrator can add and remove users from User Groups. These users automatically have the privileges defined by the User Group in addition to any explicitly assigned RBAC Roles.
Policy
Policy packages gate has a new metadata trigger and provides the following parameter values:
Package type to exact match against
Package name to match against (supports wildcards)
Package version to match against (supports wildcards)
Allowlists can contain either CVEs or corresponding Advisory IDs and work the same regardless of which was used to match the Trigger ID.
Reports
Report executions that fail to complete after 3 attempts will be cancelled. The report will continue to be executed on any defined schedule.
Improved the description of the Current Only filter in reports that contain tag information.
The /system/statistics endpoint now includes the number of successful policy evaluations and the number of reports generated.
Improved the performance of the background task that deletes older runtime inventory based on the configuration value inventory_ttl_days.
Improved the performance of Policy Evaluations.
Improved the behavior of the GitHub Vulnerability Provider when a token is not provided. The system will automatically disable this provider and log a warning message to alert the user.
Fixes
Addressed an issue where the policy’s dockerfile gate with effective_user trigger could not determine the effective user.
Enterprise provides better handling of NuGet packages.
Syft v0.105.0 improved its ability to search common patterns within a go binary. This should resolve an issue determining the version where the main module is (devel).
Addressed a failure seen by all the feed providers when the GitHub Token was set to NULL instead of an empty string.
Fixed the Policy distro gate when the version field was a non-numeric value (ie latest).
Policy Engine has improved its validation of the grype-db during startup.
JAR filenames that contain an underscore are now parsed correctly in SBOMs.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
UI Updates
Improvements
The results table within the Vulnerabilities tab now contains a
CVEs column that lists all CVEs associated with each
vulnerability.
NVD CVSS Base Score is now included in the CSV and JSON reports that
are generated from the Vulnerabilities tab in the Artifact
Analysis view. In addition, a CVEs column has also been added in
order to fully represent every CVE associated with each vulnerability.
The results view for a report now contains the following additional
details:
Results generation started at
Results generation completed at
Fixes (v5.3.1)
Due to a regression accidentally introduced in version 5.2.0, the migration of reports predating 5.0.0 would fail upon upgrading to 5.2.0. This failure resulted in a service error when attempting to view the report from the Saved Reports view. This issue has now been resolved.
In rare cases, the Accounts view would return a 404 if it tried to fetch users from an account that had been deleted by another admin. This issue has now been addressed.
Due to a regression in 5.3.0, the calendar widget available in Events and Policies was not centered correctly. This issue has now been resolved.
Fixes (v5.3.0)
An issue where reports that have no results would serve corrupted (JSON) or empty (CSV) files on download has now been addressed.
In previous releases, the timestamps displayed in the Report
Results view were not correctly calculated if the page was
visited directly via URL, or if the page was refreshed. Now fixed.
Deleted image retention policies are no longer displayed in the
System > Data Management view (admin only).
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
Enterprise v5.2.0 requires Postgres 13 or greater.
Enterprise v5.2.0 requires that the previous version was Enterprise v4.0.0 or greater. We strongly recommend that you upgrade to Enterprise v4.9.5 prior to attempting this upgrade.
Enterprise v5.2.0 requires the use of the Enterprise Helm Chart. Please see the table of compatible versions below.
Enterprise v5.2.0 requires that you upgrade your integrations and clients. Please see the table of compatible versions below.
Improvements
RBAC Roles
Adds new system role called account-viewer. This role allows the user to list all the accounts within Anchore Enterprise. Authorization to bestow this role is restricted to system administrators.
Reports
Provides a configuration variable, services.reports.use_volume, which directs the Report Service to use disk space instead of memory while generating reports.
The “Inherited From Base” field is now available in the vulnerability-related reports, including:
Artifacts by Vulnerability
Images Affected by Vulnerability
Runtime Inventory Images by Vulnerability
Tags by Vulnerability
Vulnerabilities by ECS Container
Vulnerabilities by Kubernetes Container
Vulnerabilities by Kubernetes Namespace
Improves the performance of the Kubernetes Namespace Vulnerability Loader within the Report Worker Service.
API
Adds a /system/statistics endpoint to return various system statistics and counters over time.
The /images/{image_digest}/vuln/{vuln_type} endpoint provides a query flag, include_vuln_description, that indicates whether to include the vulnerability description field in the response (see the sketch after this API list).
Provides a new field, password_last_updated, in the response of /accounts/{account_name}/users.
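A small sketch of the include_vuln_description flag referenced in the API notes above. The URL, credentials, digest, the os vulnerability type, and the /v2 prefix are placeholders or assumptions; the response is printed as-is rather than assuming a particular field layout.

```python
import requests

ANCHORE_URL = "https://anchore.example.com:8228"  # placeholder API endpoint
AUTH = ("admin", "<password>")                    # placeholder credentials
IMAGE_DIGEST = "sha256:<your-image-digest>"       # placeholder digest

# Ask for the vulnerability description field to be included in the response.
resp = requests.get(
    f"{ANCHORE_URL}/v2/images/{IMAGE_DIGEST}/vuln/os",
    params={"include_vuln_description": "true"},
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())
```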
API Keys
Provides a configuration variable, user_authentication.remove_deleted_user_api_keys_older_than_days, which determines the number of days API Keys will remain in the database.
Fixes
Corrects the recorded start time of a Scheduled Query in the unlikely event that the system restarted the report.
Addresses an issue with the RedHat vulnerability data provider not automatically updating OVAL files which prevents getting accurate fix version information for appstream packages in RHEL 9.
Addresses an issue with grype-db matching logic for RHEL 9, where packages no longer report a modularity, resulting in false positives. Specifically, RHEL 9’s default stream no longer reports a modularity.
API endpoint /images/{image_digest}/content/java returns a version format consistent with the output from AnchoreCTL.
Fixes an issue where the services.reports_worker.data_egress_window was not working correctly for the runtime reports.
Fixes a failure in Source SBOM imports that refer to poetry.lock or Python requirements files.
An interrupted report generation will now error out correctly instead of trying to persist a partially generated report.
Fixes an issue where CVE-2023-44487 would show the incorrect severity.
Licenses for all package content types are now returned when available.
The cpes property returns a list of strings or an empty list for all package content types.
Reintroduced the Policy Evaluation Cache which aids in better evaluation performance.
Logging
Reduces the number of log warning messages for orphaning services.
Suppress an SQLite exception that was not impacting the system.
Removes an incorrect error message in the Reports Service that looked like the following “Could not trigger reports_image_refresh after multiple retries. Will retry on next cycle”.
Deprecations
Support for OpenStack Swift, which is an open-source object storage system, has been deprecated. Please see Object Storage for a list of supported Object Stores.
UI Updates
Improvements
Administrators can now assign the system-wide account-viewer role
to users. This role allows users to list all accounts in the system
and is intended for programmatic access to the Anchore API.
Administrators can now view the last time a user password was
changed from the summary table in the Accounts view.
The error indicator for a failed report has been updated to provide
more information about the failure.
From within the new Data Management view, administrators can
now set policies to determine the removal schedule for images
in the system across all accounts. The policies allow you to
specify the number of days to retain images, based on either
presence in the runtime inventory or their presence globally.
Logs are now written to a file (by default in the
/var/log/anchore directory) in addition to the console. The
logs are rolled once a maximum capacity of 10Mb is reached, and
the last 10 log files are retained. In addition, outbound
requests made by the application to our Anchore Enterprise API now
display the request identifier used within our services, which can
be used to correlate the UI request with the platform service
logs.
A Licenses column has been added to the Java sub-tab.
The "Inherited From Base" field has been added as a default to a variety
of vulnerability-related reports including:
Artifacts by Vulnerability
Images Affected by Vulnerability
Runtime Inventory Images by Vulnerability
Tags by Vulnerability
Vulnerabilities by ECS Container
Vulnerabilities by Kubernetes Container
Vulnerabilities by Kubernetes Namespace
Fixes
Administrators who switch into a different (non-administrative)
account context are no longer able to create global reports in
that account.
Previously, when a saved report was reconfigured (for example, by
changing the name or description), the filter details would be
dropped from the AppDB record, preventing the report from being
viewed (although it would still be available for download). This
issue has now been fixed.
Administrators who are authenticated via LDAP are now able to
create and manage API keys for non-LDAP administrative and standard
users (although not for themselves, because we currently don’t
support API Key self-service for LDAP authenticated users).
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
Enterprise v5.1.1 requires Postgres 13 or greater.
Enterprise v5.1.1 requires that the previous version was Enterprise v4.0.0 or greater. We strongly recommend that you upgrade to Enterprise v4.9.3 prior to attempting this upgrade.
Enterprise v5.1.1 requires the use of the Enterprise Helm Chart. Please see the table of compatible versions below.
Enterprise v5.1.1 requires that you upgrade your integrations and clients. Please see the table of compatible versions below.
Improvements
Reports
Performance improvement in report generation.
Reduction of database swap space needed during report loading.
Enterprise v5.1.0 requires Postgres 13 or greater.
Enterprise v5.1.0 requires that the previous version was Enterprise v4.0.0 or greater. We strongly recommend that you upgrade to Enterprise v4.9.3 prior to attempting this upgrade.
Enterprise v5.1.0 requires the use of the Enterprise Helm Chart. Please see the table of compatible versions below.
Enterprise v5.1.0 requires that you upgrade your integrations and clients. Please see the table of compatible versions below.
Support for API Keys. API Keys are manually generated credentials used to authenticate with Anchore Enterprise. For more information, please see API Keys.
Note: This feature is not currently available for users who have authenticated using LDAP
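A hedged sketch of calling the API with a generated key, assuming (per the API Keys documentation) that keys are presented as HTTP basic-auth credentials under the reserved _api_key username; verify the exact scheme against that page before relying on it.

```python
import requests

ANCHORE_URL = "https://anchore.example.com:8228"  # placeholder API endpoint
API_KEY = "<your-generated-api-key>"              # placeholder API key value

# Assumption: the key is sent as the basic-auth password with the reserved
# username "_api_key" (see the API Keys documentation for the exact scheme).
resp = requests.get(f"{ANCHORE_URL}/v2/images", auth=("_api_key", API_KEY))
resp.raise_for_status()
print(resp.json())
```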
Vulnerabilities
Provide additional vulnerability matching for goCompiledVersion.
Provide vulnerability matching for pre-released versions of Debian.
Support capture of vulnerability data for Ubuntu 23.04 (Lunar Lobster) and Ubuntu 23.10 (Mantic Minotaur) once publishing commences from Canonical.
Analysis
All namespaced Python packages are persisted during analysis, which improves the display of the installed location for Python packages.
Reports
Report generation can be scaled out to multiple report pods.
Runtime reports now work with the enable_data_egress and data_egress_window configuration options. Please review Reports for more information.
Improved report service logging to provide better error messages.
Runtime report filters for Labels now supports multiple labels.
RBAC Roles
image-lifecycle - permissions around management of archival rules.
registry-editor - permissions to manage private registry credentials.
General System Improvements
Improve memory profile and behavior in the API service.
Improve logging within the feed service.
Provide clear logging of the service version and db schema during startup.
Fixes
Better error handling for policies that are missing data from the document store.
Ability to execute a software downgrade from a patch release to a release within the same Major.Minor version numbers.
Prevent a deadlock when two agents are reporting inventory from the same Cluster/Namespace.
If report generation exceeds the configured timeout, the execution record will be marked as timed out and processing will be halted to allow other scheduled reports to start.
Vulnerability matching now properly accounts for maven versions according to the maven spec rather than the plain semver spec.
Fixed an issue that prevented new Windows OS containers from being analyzed properly.
Image digests will now match when an image is analyzed within Enterprise (centralised analysis) and the image SBOM is imported via AnchoreCTL (distributed analysis).
If an error occurs during database upgrade, the error will be elevated to the pod to prevent it from starting.
Image imports that contain secret or content search results will now have the correct line number and name translations.
Fix a grypedb digest mismatch that can occur when Policy Engine syncs with the Feed Service.
UI Updates
Improvements
API Token Support
Users can now create and manage API keys for use with the Anchore API. Administrators can control the keys for all users from the System > Accounts view, and all users can create or revoke their own keys from the dropdown menu in the top navigation bar.
Note: This feature is not currently available for users who have authenticated using LDAP
Application Vulnerabilities
Vulnerabilities data for an application group can now be downloaded in JSON format from the Applications view
The Artifact Analysis view now indicates, if available, the fat manifest ID associated with the currently selected artifact in the breadcrumb trail
The Artifact Analysis > SBOM view now includes a Version column to the Java sub-tab
Reports
The Vulnerabilities by ECS Container report now provides the Will Not Fix and Last Seen fields
The Vulnerabilities by Kubernetes Container report now provides the Last Seen field
The Fix Observed At field has been added as a default to a variety of vulnerability-related reports
Help text improvements have been made to the filters associated with runtime-related reports
Accounts
The email address associated with an account can now be updated by an administrator
The roles provided in the user-creation dialog within an account are now alphabetically sorted
UI Theme
A dark theme has been added to the application. This can be enabled by clicking the Dark Mode toggle in the top right of the UI. By default, the theme will follow the system theme, but it can be overridden by the user.
Fixes
Reports
Any previous errors are now cleared when the configuration dialog is opened. In addition, the title of the dialog no longer changes as a new name is entered.
The Report Results page displayed the execution schedule as UTC, which was inconsistent with the information shown in the Saved Reports view, where it is converted to the local timezone. Now fixed.
Licenses are now displayed correctly in the Artifact Analysis > SBOM view; previously they would be displayed as Unknown
Image Selection
A significant performance improvement has been applied to the repository summary operation that presents the interstitial dialog when adding a repository
Clicking an enabled alert subscription toggle for tags that inherit their subscription state from their parent repository would not disable the subscription for the tag; instead, a new subscription would be added for that specific tag, with another tag required to actively disable the entry. This has now been fixed
Various supporting libraries have been updated to improve security and performance, and to remove deprecation warnings from both browser and server output logs. Redundant libraries have been removed to reduce the application’s startup time and overall size.
Enterprise v5.0.0 requires Postgres 13 or greater.
Enterprise v5.0.0 requires that the previous version was Enterprise v4.0.0 or greater. We strongly recommend that you upgrade to Enterprise v4.9.0 prior to attempting this upgrade.
Enterprise v5.0.0 requires the use of the Enterprise Helm Chart v2.0.0.
Enterprise v5.0.0 requires that you upgrade your integrations and clients. Please see the table of compatible versions below.
Improvements
V2 API
The Anchore Enterprise API has been updated. For complete details, please review Migrating from API V1 to V2.
The Anchore Enterprise API is found in the API Service. The RBAC Manager API, Notifications API, and Reports API are now served through that same endpoint. Those services are now internal-only services for processing requests in the 5.0 release.
fix_observed_at is now returned as part of the GET /v2/images/{image_digest}/vuln/{vuln_type} endpoint response where a fix is available.
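A short sketch of retrieving fix_observed_at from the endpoint named above. The URL, credentials, and digest are placeholders, the all vulnerability type and the surrounding response field names are assumptions, and the exact layout should be checked against the V2 API reference.

```python
import requests

ANCHORE_URL = "https://anchore.example.com:8228"  # placeholder API endpoint
AUTH = ("admin", "<password>")                    # placeholder credentials
IMAGE_DIGEST = "sha256:<your-image-digest>"       # placeholder digest

resp = requests.get(f"{ANCHORE_URL}/v2/images/{IMAGE_DIGEST}/vuln/all", auth=AUTH)
resp.raise_for_status()

# fix_observed_at is returned per vulnerability where a fix is available; the
# list and key names used below are assumptions about the response shape.
for vuln in resp.json().get("vulnerabilities", []):
    if vuln.get("fix_observed_at"):
        print(vuln.get("vuln"), vuln.get("fix"), vuln.get("fix_observed_at"))
```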
Reports
Scheduled Query Executions now contain a status field. Values include: pending, error, running, and complete.
The pagination of the scheduledQueries query has been improved. An additional query scheduledQueryExecutions has been added to allow pagination of the executions of a specific scheduled query.
Provided a Fix Observed Date for all report queries that contain vulnerabilities information. This Fix Observed Date is the date which Anchore observed that a fix was available.
Improved the Filter Descriptions within the runtime reports.
False Positive Reduction
Provides configuration settings so users can select which package types use CPE-based matching against NVD. For additional details, please review False Positive Management.
Policy
Improvements in presentation and validation during policy editing have been made. Please see Policy for an overview on using policies.
New distro policy gate has been added with a deny trigger. Required parameters include the Name of the Distribution, Version of Distribution, and the Operation to perform the evaluation (ie. <, >, !=).
RBAC Roles
Provided a new user role called image-developer. Used alone, the role limits the user to viewing images, vulnerabilities, policies, and policy evaluations.
Events
The ANCHORE_EVENT_RETENTION_AGE_DAYS has now been set to 180 days by default.
Runtime Inventory
Now supports a new configuration option inventory_ingest_overwrite which, when set to true, stores only the most recent
inventory per cluster/namespace. Note: the inventory_ttl_days continues to be available for use.
Fixes
Image Dockerfile Status now reports correctly even after a force re-analysis.
Images analyzed from runtime inventory now have the correct Dockerfile Status reported.
Policy
Improved Policy validation; The policy editor no longer allows saving policies with unknown elements.
Policy Name is now a required field during the creation of new policies.
Tag Drift Gate no longer fails with images analyzed with 4.9.x.
The createScheduledQuery mutation now correctly returns the createdAt, updatedAt, and account fields.
A verbose warning log message in the Policy Engine Service, regarding sqlalchemy, has been addressed.
Addressed an exception in the Report Service when loading an image with an empty dockerfile_mode.
The report vulnerabilitiesByKubernetesContainer executes correctly even when node information is not present.
The V2 API now specifies the version field in the ContentJAVAPackageResponse. This is the response for GET v2/images/{image_digest}/content/java.
Fixed a scale issue where an image, which has been queued for analysis, can be garbage collected prior to being processed.
Deprecations
The anchore-cli has been deprecated and removed from the docker.io/anchore/enterprise image
AnchoreCTL is available within docker.io/anchore/enterprise image today
AnchoreCTL is the only supported command line tool for interacting with Anchore Enterprise.
KAI (Kubernetes Automated Inventory) is no longer compatible with Enterprise v5.0.0. A new version of this agent, called anchore-k8s-inventory, is available now and is compatible with Enterprise v4.7.0. You may start migrating to this new agent today.
Support for REM (Remote Execution Manager) has been deprecated. It is no longer supported in Enterprise v5.0.0.
Analyzer Service no longer supports multiple analysis threads. The concurrentTasksPerWorker value is no longer valid within the Enterprise Helm Chart. Analysis throughput should be increased by adding more analyzer pods instead.
UI Updates
Improvements
The Anchore Enterprise Client now uses the Anchore Enterprise V2 API. This transition should be transparent to users. However, if you encounter any issues, please contact support.
The Reports feature has been rebuilt to provide a more intuitive and streamlined experience for creating, scheduling, and managing reports. The new report manager is now the default view when you click the Reports icon in the main navigation bar. If any reports are already present, the Saved Reports tab will be displayed. If no reports are yet available, you will initially see the New Report tab. Once you have created at least one report, the Saved Reports tab will become available as the default.
This component offers the following enhancements:
Report composition is simplified, combining the capabilities of the previous Quick Reports and Report Manager features.
Scheduling has also been simplified. Reports can either be generated on demand or scheduled to run at a specific time.
Templates can now be created at any time, either from an ad-hoc report or from a scheduled report, and are stored in their own dedicated tab. Custom (user) templates and system templates are separated into their own views.
Report data, whether scheduled or ad-hoc, can be downloaded in CSV or JSON format at any time.
Report schedules can be easily reconfigured or removed after their creation.
Individual report items can be removed.
In addition to the above, performance improvements have been made to the report generation process.
Note: In previous versions of the UI, users could create reports using entities known as queries, which were stored filter sets. These sets could be associated with one or more schedules, each containing multiple result items. In the new reports UI, the concept of queries within the Reports Manager has been replaced by storing individual reports under Saved Reports. Therefore, migrating to version 5.0.0 will have the following effects:
Queries that contain schedules will be converted into multiple reports—one for each schedule—with their associated result entries displayed when the report item is expanded.
Queries that do not contain schedules will be turned into custom templates.
The Fix Observed Date is now displayed within the Vulnerabilities tab of the Images view. This date, which is the date Anchore observed a fix being available for a given vulnerability, is also included in the reports where applicable.
Clicking the View Reports button in either the Images or Vulnerabilities views will take you directly to the Saved Reports tab in the Reports view. Here, you can view all reports containing data for the selected image or vulnerability.
Minor improvements have been made to the display of summary data in the rule composition dialog of the Policy Editor.
Service logging has been enhanced to provide information about connections made from the web service to the Anchore Enterprise API services. This information is displayed at the DEBUG level.
There’s a more comprehensive presentation of error details when errors are logged and displayed in the UI.
A new image-developer RBAC role has been added, which is applied to the rule-sets for the UI features. This role is intended for users who need to view images, vulnerabilities, policies, and policy evaluations, but do not need to create or edit them.
Fixes
AppDB database migrations will not execute unless the app is connected to a running instance of Anchore Enterprise services.
The application tour dialog no longer redirects users to the Dashboard view when displayed.
Logging in will now present the user with a landing page appropriate for their RBAC role.
Textual references to Anchore Engine have been replaced with Anchore Enterprise.
An error will now be displayed if a user attempts to submit a repository that has already been analyzed.
The issue where the UI sometimes did not update to reflect a logout event (even though the event was executed on the server) has been addressed.
Notification endpoints that have been disabled by an administrator can no longer be selected in the Action Workbench feature of the Artifact Analysis view.
Security enhancements have been made to the test connection operation within the Notifications view.
Package size is now accurately displayed in the Package Detail popup within the Vulnerabilities view of Artifact Analysis.
Multi-select and clear-all operations now function correctly in both the Events view and the Images view of Artifact Analysis when viewing repositories.
Dashboard metrics now use inclusive terminology.
Broken links to documentation in the Malware subtab of the Content view of Artifact Analysis have been addressed.
Various supporting libraries have been updated to improve security and performance, and to remove deprecation warnings from both browser and server output logs. Redundant libraries have been removed to reduce the application’s startup time and overall size.
Fixes an issue with the V1 Schema that prevented a JIRA Notification Endpoint from being configured via the UI.
Addresses an issue with the RedHat vulnerability data provider not automatically updating OVAL files which prevents getting accurate fix version information for appstream packages in RHEL 9.
Addresses an issue with grype-db matching logic for RHEL 9, where packages no longer report a modularity, resulting in false positives. Specifically, RHEL 9’s default stream no longer reports a modularity.
UI Updates
Fixes
The control used to test notifications in the Notifications view would
return a 400 error when used. Now fixed.
In previous versions, the Report Manager would not display report output
on account of an internal page-redirection condition that was an artifact of
pre-release 5.0 testing. This issue has now been addressed.
Previously, the form that allowed you to edit a JIRA notification was not
displaying the required fields. This issue has now been addressed (as
described by the fix in the Enterprise Service Updates section above).
Recommended Component Versions
Component: Recommended Version
Enterprise: v4.9.5
Enterprise UI: v4.9.1
Enterprise Helm Chart: v1.0.4
Engine Helm Chart (Deprecated): v1.28.7
AnchoreCTL (V1 API Compatible): v1.8.0
AnchoreCTL (V2 API Compatible): v4.9.0
anchore-k8s-inventory: v1.1.1
anchore-ecs-inventory: v1.2.0
KAI (Deprecated): v0.5.0
Kubernetes Admission Controller: v0.4.0
REM - Remote Execution Manager (Deprecated): v0.1.10
Harbor Scanner Adapter: v1.1.0
Jenkins Plugin: v1.1.0
12.1.29.2 - Anchore Enterprise Release Notes - Version 4.9.4
Anchore Enterprise v4.9.4
Anchore Enterprise release v4.9.4 contains targeted fixes. No database upgrade is needed.
Note
Please view the details around the upcoming Enterprise v5.0.0 release. Important requirements must be met before upgrade. See link below.
Fix an issue that prevented new Windows OS containers from being analyzed properly.
The report vulnerabilitiesByKubernetesContainer executes correctly even when node information is not present.
Prevent a deadlock when two agents are reporting inventory from the same Cluster/Namespace.
Fix a grypedb digest mismatch that can occur when Policy Engine syncs with the Feed Service. The issue is seen as a ChecksumMismatchError in the policy-engine’s logs and results in a failure to sync the feeds.
Fix an exception in the Report Service when loading an image with an empty dockerfile_mode.
Ability to execute a software downgrade from a patch release to a release within the same Major.Minor version numbers. Also provides better log messages during upgrade.
Image digests will now match when an image is analyzed within Enterprise (centralised analysis) and the image SBOM is imported via AnchoreCTL (distributed analysis).
Recommended Component Versions
Component: Recommended Version
Enterprise: v4.9.4
Enterprise UI: v4.9.0
Enterprise Helm Chart: v1.0.2
Engine Helm Chart (Deprecated): v1.28.4
AnchoreCTL (V1 API Compatible): v1.8.0
AnchoreCTL (V2 API Compatible): v4.9.0
anchore-k8s-inventory: v1.1.1
anchore-ecs-inventory: v1.2.0
KAI (Deprecated): v0.5.0
Kubernetes Admission Controller: v0.4.0
REM - Remote Execution Manager (Deprecated): v0.1.10
Harbor Scanner Adapter: v1.1.0
Jenkins Plugin: v1.1.0
12.1.29.3 - Anchore Enterprise Release Notes - Version 4.9.3
Anchore Enterprise v4.9.3
Anchore Enterprise release v4.9.3 contains targeted fixes. No database upgrade is needed.
Note
Please view the details around the upcoming Enterprise v5.0.0 release. Important requirements must be met before upgrade. See link below.
Resolved a memory consumption issue in the Policy Engine Service that could occur when handling images with multiple vulnerabilities without fixes. This fix addresses the issue identified during vulnerability scanning, ensuring more efficient resource usage.
Recommended Component Versions
Component: Recommended Version
Enterprise: v4.9.3
Enterprise UI: v4.9.0
Engine Helm Chart: v1.28.3
AnchoreCTL (V1 API Compatible): v1.8.0
AnchoreCTL (V2 API Compatible): v4.9.0
anchore-k8s-inventory: v1.1.1
anchore-ecs-inventory: v1.2.0
KAI (Deprecated): v0.5.0
Kubernetes Admission Controller: v0.4.0
REM - Remote Execution Manager (Deprecated): v0.1.10
Harbor Scanner Adapter: v1.2.0
Jenkins Plugin: v1.1.0
12.1.29.4 - Anchore Enterprise Release Notes - Version 4.9.2
Anchore Enterprise v4.9.2
Anchore Enterprise release v4.9.2 contains targeted fixes. No database upgrade is needed.
Note
Please view the details around the upcoming Enterprise v5.0.0 release. Important requirements must be met before upgrade. See link below.
Improved the efficiency of Vulnerability Scans. Slower scan time has been linked to the policy-engine service hitting Out of Memory conditions under increased load.
Fixed the Tag Drift Policy Gate which was failing on images analyzed by Enterprise v4.9.0 or later.
Restored the Runtime Inventory Image TTL setting which keeps only the most recent set of inventory per namespace.
Improved the memory profile and reduced memory usage for all the services of Anchore Enterprise.
V2 API Fixes
POST /v2/images - prevent deprecated fields from being accepted in the V2 API
GET /v2/images/{image_digest}/check - returns both the overall final_action, which includes the result of the allow/deny lists, as well as the policy_action of the policy rule evaluation.
GET /v2/subscriptions - when the subscription_type is repo_update, now returns the subscription_value data in the V2 API format.
POST /v2/subscriptions - when the subscription_type is repo_update, prevents non-valid json to be added via the subscription_value data.
GET /v2/subscriptions/{subscription_id} - when the subscription_type is repo_update, now returns the subscription_value data in the V2 API format.
PUT /v2/subscriptions/{subscription_id} - when the subscription_type is repo_update, prevents non-valid json to be added via the subscription_value data.
The Evaluation Details field of Policy Evaluations will contain a policy_action field. This field represents the policy result before applying image allow/deny lists (illustrated in the sketch below).
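A brief sketch contrasting the two fields described above. The URL, credentials, and digest are placeholders; the response is printed whole because the exact placement of final_action and policy_action within the evaluation payload should be checked against the V2 API reference.

```python
import json

import requests

ANCHORE_URL = "https://anchore.example.com:8228"  # placeholder API endpoint
AUTH = ("admin", "<password>")                    # placeholder credentials
IMAGE_DIGEST = "sha256:<your-image-digest>"       # placeholder digest

resp = requests.get(f"{ANCHORE_URL}/v2/images/{IMAGE_DIGEST}/check", auth=AUTH)
resp.raise_for_status()

# Inspect the evaluation: final_action reflects the overall result including
# allow/deny lists, while policy_action is the policy rule result before
# those lists are applied.
print(json.dumps(resp.json(), indent=2))
```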
Recommended Component Versions
Component: Recommended Version
Enterprise: v4.9.2
Enterprise UI: v4.9.0
Engine Helm Chart: v1.28.1
AnchoreCTL (V1 API Compatible): v1.8.0
AnchoreCTL (V2 API Compatible): v4.9.0
anchore-k8s-inventory: v1.1.1
anchore-ecs-inventory: v1.2.0
KAI (Deprecated): v0.5.0
Kubernetes Admission Controller: v0.4.0
REM - Remote Execution Manager (Deprecated): v0.1.10
Harbor Scanner Adapter: v1.2.0
Jenkins Plugin: v1.1.0
12.1.29.5 - Anchore Enterprise Release Notes - Version 4.9.1
Anchore Enterprise v4.9.1
Anchore Enterprise release v4.9.1 contains targeted fixes. No database upgrade is needed.
Note
Please view the details around the upcoming Enterprise v5.0.0 release. Important requirements must be met before upgrade. See link below.
Fixes loading all bundles in the <service_dir>/bundles directory of the API service when new accounts are created.
Vulnerability records will be created for out-of-support entries from the RHEL Provider when none of the in-support versions are affected.
Addressed a failure with grypedb syncing with Policy Engine.
Enterprise now properly handles the error case where a vulnerability provider fails to run. For example, the GitHub Provider will fail if the GitHub API Key was not properly provided in the config.
Recommended Component Versions
Component: Recommended Version
Enterprise: v4.9.1
Enterprise UI: v4.9.0
Engine Helm Chart: v1.27.2
AnchoreCTL (V1 API Compatible): v1.8.0
AnchoreCTL (V2 API Compatible): v4.9.0
anchore-k8s-inventory: v1.1.1
anchore-ecs-inventory: v1.1.0
KAI (Deprecated): v0.5.0
Kubernetes Admission Controller: v0.4.0
REM - Remote Execution Manager (Deprecated): v0.1.10
Harbor Scanner Adapter: v1.2.0
Jenkins Plugin: v1.0.25
12.1.29.6 - Anchore Enterprise Release Notes - Version 4.9.0
Anchore Enterprise v4.9.0
Anchore Enterprise release v4.9.0 contains targeted fixes and improvements.
A Database update is needed.
Note
Please view the details around the upcoming Enterprise v5.0.0 release. Important requirements must be met before upgrade. See link below.
Anchore Enterprise V2 API is now available for use.
The V2 API has been provided for early adoption for any customer who has custom integrations or scripts that may directly access the V1 API. This will provide extra time to migrate to the new V2 API endpoints prior to the official Enterprise v5.0.0 release.
The V1 APIs were distributed across several files and have now been consolidated into the single V2 API (Anchore API Swagger).
The following V1 APIs have been deprecated:
Enterprise API Swagger
Engine API Swagger
Notifications Swagger
RBAC Manager Swagger
Reports Swagger
For more details about the Anchore Enterprise V2 API, and to view the V2 swagger, please visit API Usage.
Kubernetes and ECS Runtime Inventory ingest path received performance enhancements.
Reports
Scheduled Queries now provide an executionsLimit filter.
Improvements in both performance and memory consumption were made to the following reports:
Vulnerabilities by Kubernetes Namespaces
Vulnerabilities by Kubernetes Containers
Vulnerabilities by ECS Containers
Added several new Metrics within the report service. These are now available via Prometheus.
Configuration
Image import maximum size is now configurable. Current default size is 100 MB.
Docker Compose users can set the environment variable ANCHORE_MAX_IMPORT_CONTENT_SIZE_MB
Helm users can modify max_import_content_size_mb
Source repository import maximum size is now configurable. Current default size is 100 MB.
Docker Compose users can set the environment variable ANCHORE_MAX_IMPORT_SOURCE_SIZE_MB
Helm users can modify max_source_import_size_mb
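As a rough sketch of what these limits mean in practice, the snippet below checks a locally generated SBOM against the configured limit before attempting an import. The environment variable name and the 100 MB default come from the notes above; the file path is a placeholder.

```python
import os
from pathlib import Path

# Falls back to the documented 100 MB default when the variable is unset.
limit_mb = int(os.environ.get("ANCHORE_MAX_IMPORT_CONTENT_SIZE_MB", "100"))

sbom_path = Path("image-sbom.json")  # placeholder for an SBOM produced by your tooling
size_mb = sbom_path.stat().st_size / (1024 * 1024)

if size_mb > limit_mb:
    print(f"SBOM is {size_mb:.1f} MB, above the {limit_mb} MB import limit; "
          f"raise ANCHORE_MAX_IMPORT_CONTENT_SIZE_MB (or max_import_content_size_mb for Helm).")
else:
    print(f"SBOM is {size_mb:.1f} MB, within the {limit_mb} MB import limit.")
```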
Provided a configuration option to bypass object store content checks. This was provided to aid our customer support team during specific triage. Please contact customer support for additional information.
Policy Engine can now capture and persist additional metadata for vulnerabilities reported by the vulnerability provider sync. The following observed dates are persisted:
The date on which a vulnerability within a provider namespace is first observed by Enterprise via the vulnerability provider sync.
The date on which a specific package fix is first observed by Enterprise via the vulnerability provider sync. This “fix observed date” will be used during policy eval of max days since fix to give a more consistent evaluation result across all newly analyzed image and source SBOMs.
Support capture of vulnerability data for Ubuntu 23.04 (Lunar Lobster) and Ubuntu 23.10 (Mantic Minotaur) once publishing commences from Canonical.
Provide support for vulnerability data for Mariner.
If a Vunnel provider fails, the system will build a new sync using the previous data for the failing provider and the new data from the other providers. This change also provides improved messaging around failing providers.
Improved Java matches for source SBOMs by capturing more metadata during SBOM imports.
Fixes
Reports
Handled an error when the service loads data for the ECS Container Report Table and Kubernetes Container Report Table in cases where a container stops being reported long enough to be removed from the Catalog and is then reported again.
The report service no longer triggers an out of memory error when running larger runtime workloads.
The Archive Image Delete force flag option now works even when the image is in the archiving state.
ECS inventory that contains both tasks belonging to a service and standalone tasks is now accepted properly.
Fixed an issue seen with the Ubuntu provider failing to sync when the git repo has untracked files present.
Addressed an issue where distroless images reported incorrect findings from other catalogers.
Correctly handled the Ubuntu CVE Tracker's change to its end-of-life labeling. Previously, this could cause unfixed CVEs to be missing from the data.
Modifying the value of the Catalog’s resource_metrics cycle timer is now honored.
API call POST /v1/enterprise/stateless/sbom/vuln/{vtype} now works as expected.
Proper handling for vulnerability transitions from affected to not-affected within the RHEL provider.
UI Updates
Fixes
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
Recommended Component Versions
Component | Recommended Version
Enterprise | v4.9.0
Enterprise UI | v4.9.0
Engine Helm Chart | v1.27.0
AnchoreCTL (V1 API Compatible) | v1.8.0
AnchoreCTL (V2 API Compatible) | v4.9.0
anchore-k8s-inventory | v1.1.1
anchore-ecs-inventory | v1.1.0
KAI (Deprecated) | v0.5.0
Kubernetes Admission Controller | v0.4.0
REM - Remote Execution Manager (Deprecated) | v0.1.10
Harbor Scanner Adapter | v1.2.0
Jenkins Plugin | v1.0.25
12.1.29.7 - Anchore Enterprise Release Notes - Version 4.8.1
Anchore Enterprise v4.8.1
Anchore Enterprise release v4.8.1 contains a new configuration option. No database upgrade is needed.
Note
Please view the details around the upcoming Enterprise v5.0.0 release. Important requirements must be met before upgrade. See link below.
Vulnerabilities by Kubernetes Containers is a new report template which will allow you to view and filter on
vulnerabilities found within a Kubernetes Container. The report will populate only if you have deployed the new anchore-k8s-inventory.
Vulnerabilities by ECS Containers is a new report template which will allow you to view and filter on
vulnerabilities found within an ECS Container. The report will populate only if you have deployed the new anchore-ecs-inventory.
Vulnerabilities by Kubernetes Namespace report now displays the Anchore Account Name.
Configuration
A new configuration option is available that can provide a significant reduction in resource usage. It is available for customers that do not use the /v1/query/images/by_vulnerability API.
Setting this configuration option to false will:
Disable the /v1/query/images/by_vulnerability API and return a 501 code if called.
Disable the SBOM vulnerability rescans which occur after each feed sync. It is these rescans that populate the data returned by the API.
Customers who are using /v1/query/images/by_vulnerability API, are encouraged to switch to calling the
ImagesByVulnerability query in the GraphQL API. This query provides equivalent functionality and will allow you to
benefit from this new configuration option.
Docker Compose users can set the environment variable ANCHORE_POLICY_ENGINE_ENABLE_IMAGES_BY_VULN_QUERY to false in the policy engine service.
Helm users can set the services.policy_engine.enable_images_by_vulnerability_api key in config.yaml to false.
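For customers moving off the deprecated endpoint, one low-risk way to find the exact shape of the ImagesByVulnerability query mentioned above is to introspect the reporting GraphQL schema. The sketch below assumes the reporting GraphQL endpoint is exposed at /v1/reports/graphql and uses placeholder credentials; confirm the path against your deployment's API documentation.

```python
import requests

# Assumed endpoint and placeholder credentials; adjust for your deployment.
GRAPHQL_URL = "http://localhost:8228/v1/reports/graphql"
AUTH = ("admin", "foobar")

# Standard GraphQL introspection: list the top-level query fields and their
# arguments, which should include ImagesByVulnerability and its filters.
introspection = """
{
  __schema {
    queryType {
      fields {
        name
        args { name }
      }
    }
  }
}
"""

resp = requests.post(GRAPHQL_URL, json={"query": introspection}, auth=AUTH)
resp.raise_for_status()
for field in resp.json()["data"]["__schema"]["queryType"]["fields"]:
    print(field["name"], [arg["name"] for arg in field["args"]])
```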
Fixes
Improved operating system matching prior to determining if a CVE should be reported against an image.
CVSS Scores from NVD are now preferred over other sources. This provides a more consistent end user experience.
Addressed a failure to properly generate the Policy Compliance by Runtime Inventory report while using the new anchore-k8s-inventory agent. A symptom was that the Compliance and Vulnerability Count fields within the Kubernetes tab remained in the Pending state.
Switched the archive delimiter in malware scan output from ‘!’ to ‘:’ to make shell copy-paste easier.
Improved a few misleading internal service log messages.
Fixed an issue that resulted in a scheduled query, with a qualifying filter, failing to execute. Examples of filters which will result in this failure:
Query Name | Filter Name
Tags by Vulnerability | Vulnerability Last, Tag Detected In Last
Images Affected By Vulnerability | Vulnerability Last, Tag Detected In Last, Image Analyzed In Last
Artifacts By Vulnerability | Vulnerability Last, Tag Detected In Last
Policy Compliance History by Tag | Tag Detected In Last, Policy Evaluation Latest Evaluated In Last
Policy Compliance by Runtime Inventory Image | Policy Evaluation Latest Evaluated In Last
Runtime Inventory Images by Vulnerability | Vulnerability Last, Image Last Seen In
Unscanned Runtime Inventory Images | Last Seen In
UI Updates
The Watch Repository toggles displayed in the registry and repository view tables under Images can now be suppressed when the enable_add_repositories property in config-ui.yaml is set to False for admin or standard accounts. This and other parameters contained in the UI configuration file are described
here.
The Vulnerabilities by ECS Container report template has been added. It allows you to search for a specific vulnerability across ECS containers in order to view a list of clusters, services, tasks, and containers that are impacted by the vulnerability.
The Vulnerabilities by Kubernetes Container report template has been added. It allows you to search for a specific vulnerability across Kubernetes containers in order to view a list of clusters, services, tasks, and containers that are impacted by the vulnerability.
Fixes
References to Anchore Engine have been removed and replaced app-wide with Anchore Enterprise Services.
A fix has been applied for an issue where a read-only user was not able to manage registry credentials in another context, even when they had a full-control role associated with that account.
An Account Name filter has been added to the Kubernetes Runtime Vulnerabilities by Namespace report template, and improved descriptions have been provided for the Label and Annotations filters.
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
Recommended Component Versions
Component | Recommended Version
Enterprise | v4.8.0
Enterprise UI | v4.8.0
Helm Chart | v1.26.0
AnchoreCTL | v1.7.0
anchore-k8s-inventory | v1.0.0
anchore-ecs-inventory | v1.0.0
KAI (Deprecated) | v0.5.0
Kubernetes Admission Controller | v0.4.0
REM (Remote Execution Manager) | v0.1.10
Harbor Scanner Adapter | v1.0.1
Jenkins Plugin | v1.0.25
12.1.29.9 - Anchore Enterprise Release Notes - Version 4.7.1
Anchore Enterprise v4.7.1
Note
Please view the details around the upcoming Enterprise v5.0.0 release. Important requirements must be met before upgrade. See link below.
Please Note: If you are upgrading from an Anchore Enterprise version prior to v4.2.0, there is a known issue that will require you to upgrade to v4.2.0 or v4.3.0 first. Once completed, you will have no issues upgrading to v4.7.0. Please contact Anchore Support if you need further assistance.
Enterprise Service Updates
Improvements
Runtime Inventory
Anchore has introduced two new Runtime Inventory Agents for use with the v4.7.0 release of Anchore Enterprise.
anchore-k8s-inventory and anchore-ecs-inventory will provide better access to your runtime environments.
See Kubernetes Runtime Inventory and ECS Runtime Inventory for more details.
Runtime Inventory TTL was also improved to be more effective in helping you to manage expired inventory items.
Reporting
Vulnerabilities by Kubernetes Namespace is a new template which will allow you to view and filter on
vulnerabilities found within a Kubernetes Namespace. The report will populate only if you have deployed the new anchore-k8s-inventory.
Feeds
Anchore Enterprise is now fully integrated with our open source applications anchore/vunnel and anchore/grype-db.
Chainguard Linux Vulnerability Provider has been added to the list of feeds.
Support for the OVAL v2 RHEL Security Endpoint.
Account email field is now editable via API.
The Vulnerability Package trigger adds a new parameter that controls the behavior of vulnerabilities found in the base image. The new parameter can be set to trigger on vulnerabilities in the base image, trigger on vulnerabilities that are not in the base image, or trigger only on vulnerabilities present in the base image.
Container Image SBOM generation and import from AnchoreCTL without the need for Syft
Combined with AnchoreCTL 1.6.0, you can now analyze images fully using AnchoreCTL and import the results to Enterprise, including secret scans, filesystem metadata analysis, content searches and file retrieval with equivalent functionality to what Enterprise-backend analysis scans produce. The only exception is that malware scanning is not supported by AnchoreCTL-based analysis.
Fixes
Enabling the Repo Watcher when there is already an image from the repo with an active subscription, no longer returns an error.
Adding a source SBOM that has Java packages without a metadata virtual path is now handled correctly.
Addressed an issue where Anchore Enterprise displayed multiple Binary Package Locations.
Correctly handle an import of an image sbom which contains packages with no metadata.
Improved handling of the Microsoft Windows product id during analysis of Windows containers.
UI Updates
Fixes
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
Recommended Component Versions
Component | Recommended Version
Enterprise | v4.7.0
Enterprise UI | v4.7.0
Helm Chart | v1.25.0
AnchoreCTL | v1.6.0
anchore-k8s-inventory | v1.0.0
anchore-ecs-inventory | v1.0.0
KAI (Deprecated) | v0.5.0
Kubernetes Admission Controller | v0.4.0
REM (Remote Execution Manager) | v0.1.10
Harbor Scanner Adapter | v1.0.1
Jenkins Plugin | v1.0.25
12.1.29.11 - Anchore Enterprise Release Notes - Version 4.6.0
Anchore Enterprise 4.6.0
Anchore Enterprise release v4.6.0 contains targeted fixes and improvements.
A Database update is needed.
Note
Please view the details around the upcoming Enterprise v5.0.0 release. Important requirements must be met before upgrade. See link below.
Please Note: If you are upgrading from an Anchore Enterprise version prior to v4.2.0, there is a known issue that will require you to upgrade to v4.2.0 or v4.3.0 first. Once completed, you will have no issues upgrading to v4.6.0. Please contact Anchore Support if you need further assistance.
Enterprise Service Updates
Improvements
Runtime Inventory
New API Delete functionality for any runtime inventory context that is no longer being reported on by KAI.
/enterprise/inventories DELETE
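A hedged sketch of how a script might call this endpoint is below. Only the /v1/enterprise/inventories path and the DELETE verb come from the note above; the query parameters, base URL, and credentials are illustrative assumptions and should be checked against the API specification for this release.

```python
import requests

# Placeholder deployment details.
ANCHORE_URL = "http://localhost:8228/v1"
AUTH = ("admin", "foobar")

# The parameter names used to identify the stale inventory context are assumptions;
# consult the swagger for the exact contract before relying on this.
resp = requests.delete(
    f"{ANCHORE_URL}/enterprise/inventories",
    params={"inventory_type": "kubernetes", "context": "my-cluster/my-namespace"},
    auth=AUTH,
)
resp.raise_for_status()
print("deleted inventory context, status:", resp.status_code)
```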
The Inventory Watcher's logging output at the info level has been improved to be more concise.
The Inventory Watcher now contains additional global metrics
anchore_monitor_inventory_contexts_monitored_total - Total number of contexts monitored via subscriptions
anchore_monitor_inventory_images_total ( found ) - Total number of images from runtime inventory that are being watched
anchore_monitor_inventory_images_total ( success ) - Total number of images successfully added to the catalog
anchore_monitor_inventory_images_total ( fail ) - Total number of images that failed to be added to the catalog
Policy Triggers
Vulnerability Package Trigger has a new parameter, inherited from base, which provides more control over which vulnerabilities will be considered by the policy.
true shows only vulnerabilities inherited from the base image
false hides vulnerabilities inherited from the base image
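As an illustration only, a rule using this parameter might look like the sketch below, expressed as a Python dict in the general shape of an Anchore policy rule. The parameter name inherited_from_base is an assumption based on the description above; confirm it against the policy checks documentation for your release.

```python
# Illustrative vulnerabilities/package rule; the inherited_from_base parameter
# name is assumed from its description in these notes.
rule = {
    "gate": "vulnerabilities",
    "trigger": "package",
    "action": "warn",
    "params": [
        {"name": "package_type", "value": "all"},
        {"name": "severity_comparison", "value": ">="},
        {"name": "severity", "value": "high"},
        # "false" hides findings inherited from the base image, so only
        # vulnerabilities introduced by this image's own layers are flagged.
        {"name": "inherited_from_base", "value": "false"},
    ],
}
print(rule)
```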
We have deprecated various triggers using blacklist and whitelist terminology in favor of denylist and allowlist.
The deprecated triggers will continue to work until they are removed in Enterprise v5.0.0. Note that existing allowlist
entries for the deprecated triggers will continue to work until the policy is updated to use the new triggers at
which time the trigger IDs will no longer match.
Analysis Jobs
Improves the ability of the system to re-queue image analysis and image import jobs from shut-down analyzers, minimizing the impact of scale-down operations on the set of analyzers. In addition to the existing analyzing-state timeout behavior, the system can now detect that an image was being analyzed by a now-down analyzer as soon as the analyzer is reported as down, making the re-queue time a matter of minutes instead of hours.
Additional metrics were also added to help give more visibility into analysis
anchore_analyzer_status ( waiting ) - Analyzer is idle and is waiting to receive work from the queue
anchore_analyzer_status ( error ) - Analyzer is not able to process work
anchore_analyzer_status ( processing ) - Analyzer is currently processing work
anchore_analyzer_dequeue_latency - Indicator of the responsiveness of the queue service for this analyzer
Fixes
Fixed an SSL Error for customers who are using custom certificates.
Resolved problems in the Inventory Watcher when processing large inventories.
Policy validation has been improved during initial creation of the policy bundle. This will provide a better feedback mechanism so that invalid policies can be fixed earlier.
Addressed an issue where the python binary cataloger incorrectly returned multiple instances of a python package.
UI Updates
Fixes
Deprecated policy triggers
A new warning indicator has been added to the policy rule list to flag triggers that are invalid or that have been deprecated. If you edit a policy rule containing a deprecated trigger, we also indicate that the currently selected trigger has been deprecated and replaced by another trigger, so that it is easy to know how to fix policies containing such triggers.
Policy editor tables
We have upgraded the table widgets within the policy editor to make the columns resizable.
Miscellaneous
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
12.1.29.12 - Anchore Enterprise Release Notes - Version 4.5.0
Anchore Enterprise 4.5.0
Anchore Enterprise release v4.5.0 contains targeted fixes and improvements.
There is no Database update needed.
Enterprise Service Updates
Improvements
Introducing a new RBAC Role called report-admin. This is meant to be a companion role for users that need to work with scheduled queries but do not have other write permissions.
Anchore Enterprise is now using Red Hat Universal Base Image 9 Minimal as its base image.
This significantly reduces the number of packages provided
by the operating system, thus reducing the vulnerability surface overall.
Fixes
Fixed an issue that prevented image metadata from being correctly displayed for images with unsupported packaging systems (Arch Linux, etc).
Properly identifies alpine:edge when evaluating the vulnerabilities.vulnerability_data_unavailable trigger.
Fixed an issue that allowed admin users to perform some operations against non-existent accounts.
Database upgrades now succeed from Anchore Enterprise Releases older than v4.2.0.
Users who only have read-only permissions are correctly prevented from creating, updating and deleting scheduled report queries.
Deprecation Reminders
The anchore-cli has been deprecated and will be removed from the docker.io/anchore/enterprise image during the v5.0.0 Release.
AnchoreCTL is the only supported command line tool for interacting with Anchore Enterprise.
UI Updates
Improvements
Artifact Analysis
In addition to our native JSON format, the Artifact Analysis
view now allows Software Bill of Material (SBOM) data to be
downloaded in both the Software Package Data Exchange (SPDX) format
and the OWASP CycloneDX format.
The table in the Vulnerabilities tab now contains a Detected At
column that indicates the analysis discovery time of the
vulnerability. This data is now also present in the downloadable
report data for this view.
Policy Editor
The Policy Editor dialog now displays any rules that contain
invalid or obsolete triggers in its summary table. These rules are
similarly highlighted when the rule is edited for easy removal.
Reports
From within the administrative account, both the Quick Reports
and Report Manager controls now allow you to preview and
retrieve report data from either the local account or from all
accounts system-wide.
A new template has been added to our current set of system templates
that surfaces policy compliance data against runtime inventory
artifacts.
Additional fields have been added to our existing system templates:
Vulnerability-related templates now include a links field
Runtime-related templates now include an account field
All templates now include an inventory_type field
Note: In order to surface these fields, new queries must be
created using these updated system templates as their basis—they
will not be present in any existing stored queries.
Fixes
System: User Management
Prior to this fix, updates to the user list would be inaccurate
if a user was created by another user with full-control privileges
from a switched account context. Now addressed.
Logging
A minor issue has been addressed whereby active users that had their
accounts deleted, or resided within an account that was disabled,
would not be correctly logged after this event.
Policy Manager: Rules
Gate rules created for Source artifacts will now only display the
triggers associated with that artifact type. Prior to this fix, the
entire set of triggers (for both Source and Image types) were
shown in the dropdown.
Report Manager: RBAC
Access control restrictions for report management operations have
now been applied throughout this feature. The creation, management,
and deletion of report schedules and their associated items are now
gated by the RBAC roles associated with the reports service.
Application Architecture
The Anchore Enterprise UI is now provisioned using Red Hat Universal Base Image 9 Minimal.
This image significantly reduces the number of packages provided
by the operating system, thus reducing the vulnerability surface
overall.
System: Login
Addressed an issue whereby logging in via an external IDP, as
opposed to the SSO link on the Enterprise UI login page, would fail
under certain circumstances.
Miscellaneous
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
12.1.29.13 - Anchore Enterprise Release Notes - Version 4.4.1
Anchore Enterprise 4.4.1
Anchore Enterprise release v4.4.1 is a patch release which solely addresses a critical vulnerability in the ClamAV malware scanner (CVE-2023-20032).
The malware scanner is not enabled by default within Anchore Enterprise. If you have not enabled the malware scanner you are not exposed to this vulnerability.
Please Note: If you are upgrading from an Anchore Enterprise version prior to v4.2.0, there is a known issue that will require you to upgrade to v4.2.0 or v4.3.0 first. Once completed, you will have no issues upgrading to v4.4.1. Please contact Anchore Support if you need further assistance.
12.1.29.14 - Anchore Enterprise Release Notes - Version 4.4.0
Anchore Enterprise 4.4.0
Anchore Enterprise release v4.4.0 contains targeted fixes and improvements.
A Database update will be required.
Please Note: If you are upgrading from an Anchore Enterprise version prior to v4.2.0, there is a known issue that will require you to upgrade to v4.2.0 or v4.3.0 first. Once completed, you will have no issues upgrading to v4.4.0. Please contact Anchore Support if you need further assistance.
Enterprise Service Updates
Improvements
The AnchoreCTL binary for linux x86 is now packaged into the docker.io/anchore/enterprise image for use via direct
’exec’ invocation or to copy from the image into your environment without having to access external networks. The
packaged binary will be the current release of AnchoreCTL at the time of release of Enterprise.
Configuration Options
enable_package_db_load is a new configuration option that will allow users to disable the use of the
package.verify policy trigger. Disabling this trigger will prevent further additions to the image_package_db_entries
table, which will reduce load on the database. In addition, users may now safely delete the existing entries in the
table and reclaim database capacity usage. See Database for more details.
A new option for users to specify the endpoint used for the Ubuntu Feed Driver. See
Feeds for more information.
Enterprise API now supports the ability to download SBOMs in SPDX Format and CycloneDX Format.
/images/{imageDigest}/sboms/spdx-json
/images/{imageDigest}/sboms/cyclonedx-json
/sources/{source_id}/sbom/spdx-json
/sources/{source_id}/sbom/cyclonedx-json
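A minimal sketch of pulling an SPDX SBOM for an analyzed image through these endpoints is shown below. The endpoint path comes from the list above; the base URL (including any path prefix your deployment uses), credentials, and digest are placeholders, and spdx-json can be swapped for cyclonedx-json for the CycloneDX format.

```python
import requests

# Placeholder deployment details and image digest.
ANCHORE_URL = "http://localhost:8228/v1"
AUTH = ("admin", "foobar")
digest = "sha256:0000000000000000000000000000000000000000000000000000000000000000"

resp = requests.get(f"{ANCHORE_URL}/images/{digest}/sboms/spdx-json", auth=AUTH)
resp.raise_for_status()

# Write the SPDX JSON document to disk for downstream tooling.
with open("image-sbom.spdx.json", "wb") as f:
    f.write(resp.content)
```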
A new Image Ancestry Policy Gate has been added. Allows the user to verify that a specified image contains an
approved base image. See Policy Checks
for complete details.
Binary detection is now consistent between uploaded SBOMs generated by AnchoreCTL and SBOMs generated by the
backend Enterprise Service.
Tech Preview: Enterprise Reporting provides a global endpoint which will allow administrators to generate queries that
will include data from all accounts. See Reports for more details.
Vulnerability data is now available from Feed Group - Ubuntu 22.10 (Kinetic Kudu).
Fixes
Vulnerability feed group information is now populated at time of switch-over. This should address issues with
displaying the vulnerability group record counts in systems with a large number of active images.
Addressed an error path exception in the Application Version Vulnerability API.
Addressed a parsing issue during execution of the Retrieved Files policy gate with the content regex trigger.
Scheduled report generation will gracefully handle an error found in a report and continue with the generation of other reports.
Properly account for rolling distros (currently Wolfi) when evaluating the vulnerabilities.vulnerability_data_unavailable trigger.
Addressed an analysis failure during SBOM generation for certain images with cycles of soft links.
Deprecation Reminders
The anchore-cli python client has been deprecated as of Enterprise Release
v4.2.0. It will be removed from the docker.io/anchore/enterprise image during the v4.5.0 Release.
AnchoreCTL is the only supported command line tool for interacting with Anchore Enterprise.
UI Updates
Improvements
Reporting
Prometheus logging has now been added to the application with the
following data now being captured and reported:
General node process metrics
Number of active sessions
Count of HTTP requests, split by endpoint and status code
HTTP request duration
Latency of each service, calculated during every health check cycle
Configuration
Sessions will now preferentially use OAuth tokens to provide and maintain
ongoing authentication state
In the event that the token dispensation is not possible when the user
logs in, the system will fall back to using basic authentication
Added confirmation toast on dashboard widget creation
Updated various supporting libraries to improve security and performance
Redundant libraries have been removed to reduce the app startup time and
overall size
Fixes
Reporting
Schedules that contain queries with data enumerations can now be saved
properly
Configuration
Increased the opacity on both the filter input and the sort dropdown when either is in a disabled state (that is, when no application data exists). This ensures the text is legible while still clearly indicating the inactive state.
Policy Editor Evaluation correctly updates after changes are applied
RBAC rules are now correctly applied to the radio button used to change
the active bundle
The Copy Bundle modal now allows all key events
Service errors and access control errors are now properly articulated
12.1.29.15 - Anchore Enterprise Release Notes - Version 4.3.0
Anchore Enterprise 4.3.0
Anchore Enterprise release v4.3.0 contains targeted fixes and improvements.
A Database update will be required.
Enterprise Service Updates
Improvements
Reporting Improvements
The runtimeInventoryImagesByVulnerability report query now supports
various vulnerability filters such as Vulnerability Id.
Various vulnerability-related report queries, such as artifactsByVulnerability,
tagsByVulnerability, now support filtering by one or more severities
via the Severities option.
A new report query called runtimeInventoryUnscannedImages is now available. It provides the list of images
in the runtime inventory that have not been analyzed.
Introducing a new RBAC Role called repo-analyzer. It is meant to be a companion to the image-analyzer role and
specifically provides the ability to create a repository subscription.
Now importing the Wolfi Security Feed. Used in vulnerability matching for Wolfi OS Packages.
Fixes
Fixed a failure during the cleanup of old versions of GrypeDB. This was seen to cause an issue during feed sync.
When deploying with multiple instances of policy-engine, there will only be a maximum of two GrypeDB instances.
Addressed an issue which prevented a scheduled query of a Runtime Inventory Images By Vulnerability from running.
Fixed the unlikely condition where a deleted image is added back into the system, due to a subscription processing error.
Image analysis properly displays all found versions of the same OS package.
Increased accuracy of vulnerability matches on Debian source packages when the source package version differs
from the binary package version. Requires re-analysis in order to populate necessary metadata for existing scans.
Identifies improper SSO IDP Configuration during creation or modification of an existing configuration.
Deprecation Reminders
The anchore-cli python client has been deprecated as of Enterprise Release
v4.2.0. It will be removed from the Enterprise image during the v4.4.0 Release.
AnchoreCTL is the only supported command line tool for interacting with Anchore Enterprise. It will be included in the Enterprise image during the v4.4.0 Release.
UI Updates
Improvements
A new Quick Report for Unscanned Runtime Inventory Images is now available.
It shows which images running in Kubernetes clusters have not yet been
analyzed by Anchore so that users can verify all images are scanned in CI/CD.
The Runtime Inventory Images by Vulnerability report type now supports
various vulnerability filters such as Vulnerability Id. This makes it
easier to focus efforts on zero-days (or other critical and well-known
vulnerabilities) and find exactly which runtime contexts (and the images
within) are impacted by a specific vulnerability.
Various vulnerability-related reports (Artifacts by Vulnerability,
Tags by Vulnerability, etc.) now support filtering by one or more severities
via the Vulnerability Severities option.
An improvement has been made to our cookie management for higher entropy via
an autogenerated encryption key unique to each deployment and to allow
administrators to change it if they wish.
Fixes
Fixed a bug causing logins made directly via an IDP, as opposed to the SSO
link on the Anchore login page, to fail with a 404 error.
Improved fault-tolerance in the event of an invalid or malicious websocket
request: using a scanner such as Nessus could under certain conditions lead to
an application crash.
Fixed a routing issue causing requests to /artifacts/image/ with a trailing
slash to lead to a 404 page not found error.
Various supporting libraries have been updated in order to improve security,
performance, and also to remove deprecation warnings from browser and server
output logs. Redundant libraries have been removed to reduce the app startup
time and overall size.
12.1.29.16 - Anchore Enterprise Release Notes - Version 4.2.0
Anchore Enterprise 4.2.0
Anchore Enterprise release v4.2.0 contains targeted fixes and improvements. A Database update will be required.
Enterprise Service Updates
Improvements
SSO feature enhancements include:
The ability for an Anchore administrator to create another user in the admin account who will authenticate using SSO/SAML enabling use of 2FA and other SSO security mechanisms.
A strict mode which will require SAML users to be configured in Anchore Enterprise prior to user login as an alternative to the existing behavior that creates Anchore users at login time only. This allows administrators to restrict login access for SSO users to only those users specifically allocated by the Anchore admin.
See [Configuring SSO](https://docs.anchore.com/current/docs/configuration/user_authentication/sso/) for additional information about SSO.
Adds detection of non-packaged node.js binaries during image analysis to support sbom and vulnerability scanning.
The Reporting Service now offers the ability to show and filter on vulnerabilities that the vendor of an image distribution either disagrees with or has decided not to fix. This matches the ‘vendor_only’ filtering behavior of the vulnerability APIs and AnchoreCTL.
Fixes
Fixed an issue where the analysis queue processing stops. This was seen in environments with multiple Catalog, Policy Engine, and Analyzer Containers running.
Populates fix information per module for rpm-based feeds such as oracle, rhel, and centos. The rpm modularity is now taken into account when matching rpm packages to vulnerabilities.
Make RedHat/CentOS AppStream modules fully supported for vulnerability matching with reduced false positives and more accurate fix versions.
Improved error handling during SSO IDP Configuration changes.
During the creation of an SSO default account, the default policy bundles are correctly populated.
Improved error handling in the MSRC feed driver so that invalid records are skipped and processing will continue for other records.
Feature Removal
Removed the Kubernetes Runtime Inventory Embedded mode, and associated cluster configuration APIs. This feature saw limited usage and the same goal can be accomplished by deploying KAI into the cluster directly in inventory mode. See https://docs.anchore.com/current/docs/configuration/runtime_inventory/ for more information about configuring KAI in agent mode.
Deprecation Reminders
The anchore-cli python client will be deprecated as of Enterprise Release v4.2.0. AnchoreCTL will be the only supported command line tool for interacting with Anchore Enterprise.
UI Updates
Improvements
The UI now supports the creation and configuration of administrators who
can authenticate directly using Single Sign-On (SSO). In addition,
administrators in deployments that have been configured to use
exclusionary account assignment by disabling “Just-in-Time” account
provisioning for SSO can now associate specific standard users with an
individual IDP.
For environments where analytical volume is extremely high, the
Kubernetes page now provides an optimized presentational view that
excludes information from the reporting services. This version of the view
can be enabled via the file- or environment-based Enterprise Client
application configuration
parameters.
The Vulnerabilities tab now provides a client-side filter for
Vendor Only CVEs that is enabled by default. When disabled, the
full vulnerability dataset is now displayed. Upon disabling the
filter, a new Will Not Fix column will be displayed within
the results table.
A Vulnerability Will Not Fix filter has been added to the following base templates in the Scheduled Reports view:
Images With Critical Vulnerabilities
Artifacts by Vulnerability
Tags by Vulnerability
Images Affected by Vulnerability
Fixes
In previous versions, setting a boolean filter to true in
Quick Reports would not get correctly passed to the web service. This
is now fixed.
Users with the policy-editor role should not have access to the
Artifact Analysis view. Although the associated navbar icon was
correctly disabled, users could still access the page (albeit in
read-only mode) directly via the URL. This behavior has now been
addressed.
The Only Show toggles in the Vulnerabilities tab of the Artifact
Analysis view provide a number of filters that can reduce the number of
items displayed. When applied, the table updates accordingly—however,
prior to this fix the graph and vulnerability severity summary counts did
not. This issue has now been addressed.
Prior to this fix, if a user encountered an error when saving a policy,
there was no way for them to fix the error and save the policy again
because the Save button remained disabled. Users can now attend to the
error and save the policy.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been removed to
reduce the app startup time and overall size.
12.1.29.17 - Anchore Enterprise Release Notes - Version 4.1.1
Anchore Enterprise 4.1.1
Anchore Enterprise release v4.1.1 contains targeted fixes and improvements.
Enterprise Service Updates
Improvements
Introduced a Recommendation field that is available in the Policy Rule. This field will be visible in the image policy check results. It will allow the Policy Rule creator to provide a generic hint on how to fix policy findings.
Improved the description of the “max days since creation” parameter within the vulnerability->package rule. It now states, “A grace period, in days, for a vulnerability match to be present after which the vulnerability is a policy violation”.
Fixes
Fixed the enterprise database schema upgrade process from the 4.0.x to the 4.1.0 schema when run on a FIPS-enabled host.
Improved the error message when providing an invalid tag to the External API Inventory calls.
Improved error messages around registry access failures.
Improved detection and error handling of an image that contains an empty or unknown distro.
Image analysis will succeed when the image contains an uncompressed layer.
Image analysis will succeed when the image contains un-parsable rpmdb file entries.
On a restart or a manual resync of the feed service, the system will maintain no more than 2 versions of the grype database records.
Tag status is updated immediately in reporting data. Previously, the tag status updates may have been delayed.
Deprecation Reminders
The Embedded Inventory Mode feature has been deprecated. It will be removed in the future Enterprise Release v4.2.0.
The anchore-cli python client will be deprecated as of the future Enterprise Release v4.2.0. AnchoreCTL will be the only supported command line tool for interacting with Anchore Enterprise.
UI Updates
Improvements
A Recommendation field has been added to the policy rule editor to allow
policy creators to provide bespoke remediation guidance. This information
will be surfaced within the output for any matched rule within the
Policy Compliance results table in the Artifact Analysis view.
The service log output for the application has been overhauled.
Administrators with access to the running app instance are now able to
view detailed timestamped information—categorized by level—that describes
the routes being accessed, connection and configuration details, and
information about the major operations taking place within the runtime.
Additional logging data will be added in subsequent releases.
Fixes
The management of database connectivity details from the app has been updated to handle special characters in configuration strings.
A Forbidden error is displayed when a non-administrative user tries to
directly access the /system/notifications tab via URL. It also blocks a
fetch for the LDAP configuration details for non-admins.
Various supporting libraries have been updated in order to improve
security and performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been removed to
reduce the app startup time and overall size.
12.1.29.18 - Anchore Enterprise Release Notes - Version 4.1.0
Anchore Enterprise 4.1.0
Anchore Enterprise release v4.1.0 contains targeted fixes and improvements.
A Database update will be required.
4.1.0 Upgrade Notes
The 4.1 upgrade requires a new vulnerability database to be built by the feed service using a new schema. In the time between the new deployment startup and the completion
of the first post-upgrade feed service data sync, the policy engine API will return errors for vulnerability scans. Once it receives the newly built vulnerability database
from the upgraded feed service it will resume normal operation. Depending on the deployment, the data update and new db build may take several hours.
The system will resolve this condition on its own but your maintenance window should take this into account.
Notes for new deployments
Due to improved error handling in the vulnerability scanner (see details below), new deployments will not provide vulnerability reports via the API until the first full vulnerability sync has occurred, but images may be analyzed during this time. Once the first sync is completed (you can check using anchorectl feed list), the vulnerability scans will return successfully.
Enterprise Service Updates
Improvements
Source to Image SBOM Drift
Introduces a new artifact relationship API which provides the ability to indicate that a container image was built from one or more specific source repository revisions. This allows Anchore to show when the source repository's SBOM packages are correctly found in the image SBOM.
Introduces a new policy gate and trigger which will raise drift findings in the policy compliance evaluations.
Vulnerability False Positives Reduction
Introduces an Anchore vulnerability feed shown as ‘anchore:exclusions’. This is a curated feed of vulnerability matches which will be automatically excluded from results in order to reduce false positives.
The feed utilizes Version 4 of the Grype database schema which provides support for vulnerability match exclusion data.
Application Name and Version Name improvements
Added uniqueness and non nullable constraints to the following fields:
Application.name must be unique per account
ApplicationVersion.version_name must be unique per application
Attempting to create or update these fields to a non-unique value will result in a 409 error.
During upgrade, if existing records are found to have the same value, they will be automatically renamed by appending ‘_N’ where ‘N’ is incremented for each conflict. For example, if there are two applications named “test” within an account, one will be renamed “test_1”.
Accounts may now be created with a name that contains an underscore (_) as the last character.
Tag subscriptions will now be removed when the last image for a tag is deleted from the system.
Adds a last_seen_in_days field to the archival rule exclusion block that allows images to be excluded from archival if there is a corresponding runtime inventory image where last_seen is within the specified number of days.
Image Vulnerabilities now provides the timestamp when each vulnerability was detected on the image. This is now available in the API and is indicated with the “detected_at” field.
Reduced the number of Error Events generated when there is an issue accessing the registry. You will now only see one event generated per registry/repo. Previously there would be an event for each image.
In order to reduce vulnerability false positives, it is recommended that users do not attempt vulnerability matches on go main modules with pseudo-version v0.0.0- or (devel) unless the true version has been specified via correction.
Fixes
Subscriptions are now being properly cleaned up when images are deleted or archived.
The API will return a proper error message if the caller attempts to delete an image, using the image ID, that is the latest of its tags and still has an active subscription.
Providing an unsupported vulnerability type for API sources/{source_id}/vuln will result in a proper error message.
Addressed incorrect error events regarding Image Registry Lookups. These events were generated in error even when registry credentials were valid and the lookup succeeded.
Errors that are detected during a vulnerability scan are now properly reflected in the API. Previously, it was possible that the scan would fail, but it would appear that the image had no vulnerabilities.
Importing an image SBOM where the distro version is NULL or None, will now succeed.
A max-images-per-account Archive Rule will correctly handle an image that has more than one tag associated with it.
Deprecations
The Embedded Inventory Mode Feature has been deprecated as of this release. It will be removed from the Enterprise product during the future release of v4.2.0.
Configuration Variable ‘ANCHORE_VULNERABILITIES_PROVIDER’ is no longer supported by Enterprise.
Configuration Variable ‘ANCHORE_ENTERPRISE_FEEDS_THIRD_PARTY_DRIVERS_ENABLED’ is no longer supported by Enterprise.
Future Deprecations
The anchore-cli Python client will be deprecated as of version 4.2 of Anchore Enterprise. AnchoreCTL contains all of the functionality of anchore-cli and is the default, supported tool for interacting with Anchore Enterprise as of 4.1.
UI Updates
Improvements
In SSL-enabled environments, all requests made from client are automatically upgraded to use a secure connection.
Account entries and user entries are now both permitted to contain spaces in their names. In addition, account names are now permitted to contain a trailing underscore (_) character.
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
Fixes
Single sign-on (SSO) authentication now preserves the page URL. Prior to this update, the user would always be sent to the Dashboard view, but now the original location is used after the SSO authorization round-trip is complete.
Due to a cookie misconfiguration, completion of the SSO round-trip would
not always place the user inside an authenticated view, requiring a page refresh—this issue has now been addressed.
The What's New tour dialog will now be displayed for new SSO users / SSO users logging in after a version update.
When changing the displayed item range for any table within the Kubernetes view, under certain circumstances the app would
whitescreen—this issue has now been fixed. In addition, the summary total of items shown for each table is now displayed correctly.
The documentation link provided in the Add Cluster popup in the
Kubernetes view is now correct.
12.1.29.19 - Anchore Enterprise Release Notes - Version 4.0.3
Anchore Enterprise 4.0.3
Anchore Enterprise v4.0.3 is a patch release containing targeted fixes and improvements. No database upgrade is necessary.
Enterprise Service Changes
Improvements
Expanded capability for users with an image-analyzer role. The role now has the ability to create subscriptions.
Added Amazon Linux 2022 vulnerability feed to the amazon driver. This will populate the amzn:2022 namespace.
Support added for cataloging RPM databases with NDB and sqlite formats.
Improved handling of manifest lists where the mediaType is missing.
Details of Event Notifications have been improved for the following events:
user.image.analysis.pending
user.image.analysis.processing
user.image.analysis.complete
user.image_tag.added
Fixes
The global archive rule for max number of images per account will only consider images that have been analyzed and are in the active state.
In some configurations, the global archive delete rule failed to run due to an error in the order of the rule processing. This issue has been corrected.
UI Updates
Improvements
The Applications button in the navbar will remain highlighted
when presented with the Artifact Analysis view for a source
item, as sources are considered part of the navigation path for
applications. In addition, this button will also now indicate the
last application and version viewed (if applicable) in a popup on hover.
Grab targets for Dashboard item widgets have been increased in size for easier focus and manipulation.
The legends associated with charts in the app have been removed
in all instances where the meaning of the data is otherwise
indicated.
The createSubscription permission is now a requisite to use
components that create subscriptions—this permission has also been
added to roles where it was missing yet required in the context of
the general purpose of that role (for example, image-analyzer).
Non-alphanumeric characters are now permitted in the password used
to authenticate against the AppDB service.
Fixes
The Copy Allowlist modal within the Tools dropdown for items
displayed within the Allowlists tab of the Policy Editor had
an issue whereby focus would be drawn away from the Name input
field, preventing the submission of a valid form. This behavior is
now fixed.
Due to a regression in our date component, removing a timestamp
from the Edit Allowlist Items dialog in the Policy Editor or
from the Add / Remove Allowlist Item dialog associated with
compliance results in the Artifact Analysis view would result in
an error on save. This has been fixed.
The Copy Policy modal within the Tools dropdown for items
displayed within the Policies tab of the Policy Editor
would successfully copy a policy, but would fail to close after the
operation concluded. This behavior is now fixed.
The summary count of event items in the Events view now
correlates to the number displayed after a severity filter
(WARN / INFO) has been applied. Prior to this fix, the count
would remain the same.
Removing all filter boxes displayed within the Events view would
also remove the Clear Filters button, preventing any filters
previously applied from the boxes from also being removed. This
behavior has now been fixed.
An error in our payload validation system caused the notifications
component to fail to update upon editing an entry. This issue has
now been resolved.
The RBAC permissions associated with policy-editor role are now
correctly asserted when trying to navigate to Images or
Applications using the main navigation bar (or when using the
minimized icons in the topnav that appear when the main bar is out
of view).
Various supporting libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from
browser and server output logs. Redundant libraries have been
removed to reduce the app startup time and overall size.
12.1.29.20 - Anchore Enterprise Release Notes - Version 4.0.2
Anchore Enterprise 4.0.2
Anchore Enterprise v4.0.2 is a patch release containing targeted fixes and improvements. No database upgrade is necessary.
Enterprise Service Changes
Improvements
Expanded capability for users with an image-analyzer role. The role now has the ability to modify image subscriptions.
Added support of space characters in usernames.
It is now possible to create an account name that contains a forward slash character.
Added support for policy bundle license gate triggers to differentiate between os and non os components during package license checks.
Tech Preview: Added the ability to run a stateless vulnerability scan of an SBOM via a new API call.
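A heavily hedged sketch of exercising this tech-preview call is below. The endpoint path matches the stateless SBOM scan route referenced elsewhere in these notes; the {vtype} value, request body format, base URL, and credentials are all assumptions to verify against the API specification.

```python
import requests

# Placeholder deployment details.
ANCHORE_URL = "http://localhost:8228/v1"
AUTH = ("admin", "foobar")

# Assumed: "all" as the {vtype} path value and a raw JSON SBOM document as the body.
with open("image-sbom.json", "rb") as f:
    sbom_bytes = f.read()

resp = requests.post(
    f"{ANCHORE_URL}/enterprise/stateless/sbom/vuln/all",
    data=sbom_bytes,
    headers={"Content-Type": "application/json"},
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())
```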
Added new event types which improve visibility during the image analysis workflow.
user.image.analysis.pending
user.image.analysis.processing
user.image.analysis.complete
user.image_tag.added
Fixes
Fixed the growth of log files beyond 10MB and collection of log files when reaching the maximum count of 10 files.
Fixed the detection of vendored golang modules in built binaries.
Fixed the analysis of images with binaries built by golang 1.18 to correctly identify the go modules used.
Improved grypedb to exclude matching entries that have been withdrawn from GitHub Security Advisories.
Improved grypedb to handle entries without primary vulnerability identifier which may be received from vulnerability feed services.
Fixed the ability to update existing ECR registry credentials; it no longer reports a 406 error.
UI Updates
Improvements
The content types within the SBOM tab under Artifact Analysis in the UI are now presented vertically to prevent them being truncated at narrower screen widths.
It is now possible to create an account name that contains a forward slash character
In order to improve the filtering and sorting operations within the
Mappings tab of the Policy Editor in the UI, source and image mappings are now stored within their own dedicated subtabs.
Various supporting UI libraries have been updated in order to improve
security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
Fixes
When resizing the table columns in the UI Applications view, the action
controls could be made to overflow the bounds of their cell—this is now fixed.
Items added via anchorectl would occasionally cause an app exception when
viewed from within the Applications view in the UI. This issue has now been addressed.
The JSON entry in the SBOM Report download menu in the UI within the
Artifact Analysis view had an extraneous tail pointer, which has now been removed.
The button to download a report from the Vulnerabilities tab in the
Artifacts view in the UI now correctly reads Vulnerability Report instead of Compliance Report.
The no-results condition in the content view for Malware under the SBOM tab
of the Artifact Analysis view in the UI did not disambiguate between no results being found vs. malware scanning not being enabled. If malware scanning is not enabled, the message now indicates this and provides a link to the documentation for this feature.
As of release 4.0.0, the default behavior when creating a new policy bundle
was to add a default source rule and mapping, however this interfered with the upgrade path for users who wanted to upload a pre 4.0.0 bundle to the system. These default entries are no longer added.
The information and recent creation indicator labels within the Stored
Report Items component of the advanced Reports view in the UI are now
correctly aligned.
Switching account context within the UI and then attempting to download a
report would result in a fatal app error due to missing privileges on the call that fetches the data. This issue has now been addressed.
A slight error in the alignment of the header within the UI date picker
component has been addressed.
12.1.29.21 - Anchore Enterprise Release Notes - Version 4.0.1
Anchore Enterprise 4.0.1
Anchore Enterprise v4.0.1 is a patch release containing targeted fixes and improvements. No database upgrade is necessary.
Fixes
Fixes issues with vulnerability data matching for a small set of distros, including Ubuntu, Oracle Linux, and Amazon Linux. All customers are encouraged to upgrade to include this patch.
AnchoreCTL
The latest version of AnchoreCTL is 0.2.0.
AnchoreCTL is dependent on Syft v0.39.3 as a library.
AnchoreCTL v0.1.4 is vulnerable to CVE-2022-1766, which was fixed in v0.1.5+. We strongly encourage users to upgrade to the latest version.
The current features that are supported are as follows:
Ability to add SBOMs via anchorectl using stdin to provide an existing SBOM without re-creating it.
Source Repository Management: Generate an SBOM and store the SBOM in Anchore’s database. Get information about the source repository, investigate vulnerability packages by requesting vulnerabilities for a single analyzed source repository, or get any policy evaluations.
Download full image SBOMs for images analyzed with Enterprise 4.0.0.
Compliance Reports: View and operate on runtime compliance reports, such as STIGs, created by the rem tool.
Corrections Management: View and modify corrections information to help reduce false positives in your vulnerability results.
Image Management: View, list, import local analysis, and request image analysis by the system.
Runtime Inventory Management: Add, update, and view cluster configurations for Anchore to scan, as well as for the inventory reports themselves.
System Operations: View and manage system information for your Enterprise deployment.
12.1.29.22 - Anchore Enterprise Release Notes - Version 4.0.0
Anchore Enterprise 4.0.0
The Anchore Enterprise 4.0.0 release offers significant new supply chain security features expanding the Anchore Enterprise SBOM management platform beyond container scanning. Users can now generate and continuously monitor SBOMs for their source code repositories to identify vulnerability and security risks. Policy rules that are specific to managing source code are now available in the Policy Engine. Multiple source code and container image SBOMs can also now be grouped together as an Application that can be managed as a single set enabling generation of SBOMs representing a total application or service.
Additional new SBOM capabilities enable users to observe and limit SBOM drift between container image builds. Users can use policy rules to enforce immutable container best practices or help detect potentially malicious activity.
AnchoreCTL, the integration tool for use in CI/CD pipelines, has also been updated to include Source Repository Management.
A number of performance improvements have also been made to improve the response of the GUI, reporting service, as well as the efficiency of the queue processing processes.
Version 4.0.0 also includes other improvements and fixes.
New Features
The following new features are included in Enterprise 4.0.
SBOM Management
You can now generate SBOMs using AnchoreCTL as part of a command line or CI/CD workflow, through pulling content from a registry, or by submitting an artifact to the Anchore API.
SBOMs can be managed using the command line, API or GUI, where contents can be grouped together, annotated, viewed, or searched. Artifact metadata, vulnerability information, and policy evaluations can also be viewed and managed through the same interfaces.
All SBOMs can be downloaded into a variety of formats, either individually or collectively, to be sent to security teams, customers or end-users.
Applications
You can now build applications in Anchore Enterprise. Applications are the top-level building block in a hierarchical view, containing artifacts like packages or image artifacts. Applications can represent any project your teams deliver. Each application is associated with one or more application versions which track the specific grouping of artifacts that comprise a product version.
Anchore Enterprise lets you model your versioned applications to create a comprehensive view of the vulnerability and security health of the projects your teams are building across the breadth of your Software Delivery Lifecycle.
By grouping related components into applications, and updating those components across application versions as projects grow and change, you can get a holistic view of the current and historic security health of the applications from development through built image artifacts.
SBOM Drift
You can now set triggers for policy violations on changes in the SBOM between images with the same tag, so that drift between builds of your images can be detected over time.
There is a new gate called:
tag_drift
The triggers are:
packages_added
packages_removed
packages_modified
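For illustration only, a policy rule using the new gate might look like the following sketch; the rule id and action shown here are placeholders, and the exact field layout should be verified against your policy bundle schema.

{
  "id": "drift-warn-1",
  "gate": "tag_drift",
  "trigger": "packages_added",
  "action": "WARN",
  "params": []
}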
Legacy Vulnerability Scanner No Longer Supported
The legacy vulnerability scanner is no longer included as an option when installing or upgrading Anchore Enterprise.
If you currently have Enterprise configured to use the legacy vulnerability scanner, you will not be able to successfully upgrade and start the system unless you explicitly configure the current default vulnerability scanner.
Alternatively, you can remove that configuration variable so the system defaults to the current vulnerability scanner.
If you instead perform a new installation of Enterprise rather than upgrading, the current vulnerability scanner is configured by default.
Improvements
Analyzers no longer wait 5 seconds between analysis tasks if the last queue check had work available.
Adds global image count metrics to the set of available prometheus metrics.
Adds new internal reports_worker service that processes async tasks for reporting data.
Adds reporting task (data load, refresh, etc) to the set of available prometheus metrics.
System can be configured to automatically delete events older than a specified age to help manage data growth.
Go modules detected and reported from within binaries.
Removes old and unsupported PG8000 DB driver from container image. Database connection strings starting with “pg8000:” will no longer work.
Default PostgreSQL version used in the Quickstart docker-compose.yaml is updated from 9.6 to 12. Anchore’s Postgres requirements are unchanged.
Reporting service can be configured to remove reporting data for images deleted or archived.
Reporting service data update performance improvements and scalability improvements.
Fixes
Resolved data leaks from the grypedb feeds driver that could occur when process terminated by OS.
Resolved reporting service refresh issue.
Reporting service no longer looks at or attempts to refresh deleted image data. Reporting procedures now operate on data sets where the analysis state is analyzed and the image state is active.
There was an issue with the Debian driver providing empty content. The grype-db-builder now builds all Debian data in the feeds service.
The NVD CVSS scores known issue from the 3.3.0 release has been fixed. NVD CVSS scores are now present in the API responses for the request to get a detailed information query about a vulnerability feed record.
Stale feeds policy trigger issue fixed.
Report worker tag refresh issue fixed.
Fixes vulnerability scanning failures for container images with no known distro.
Known Issues
The vulnerability scanner needs to be explicitly configured for Grype. If it is configured for the v1 (legacy) vulnerability scanner, you will get an error during upgrade.
Workaround:
Helm chart: Set services->policy_engine->vulnerabilities->provider to grype.
Docker compose: The environment variable ANCHORE_VULNERABILITIES_PROVIDER=grype must be present for the policy-engine service.
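As a rough sketch of both workaround forms (only the value path and the environment variable above come from these notes; the surrounding structure is an assumption to verify against your deployment template):

# Helm values excerpt
services:
  policy_engine:
    vulnerabilities:
      provider: grype

# docker-compose.yaml excerpt, policy engine service
policy-engine:
  environment:
    - ANCHORE_VULNERABILITIES_PROVIDER=grype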
Image drift only supports comparison of images analyzed by 4.0.0. Images analyzed prior to upgrade do not support drift computation and will result in a policy evaluation warning message.
Image SBOM downloads do not include content hints entries or detected binaries (python, go) that are not installed via a package manager.
Enterprise UI Changes
New Applications tab. Observe applications in Enterprise and see a summary of the artifacts that have been collected into an application. From the application view, you can drill down into the source repositories or container images that make up the application, and browse their SBOMs.
View applications and application versions from source repositories and image containers. The information is categorized by applications, with sub-categories of application versions available.
You can download an SBOM report in JSON format for everything in an application.
View information about an artifact, such as the policies set up, the vulnerabilities, SBOM contents, and metadata information.
From the Policies tab, set up policies and policy bundles for source repositories.
Fixes
Fixed inventory view performance issues with large data sets.
Report manager now returns preview results.
The inventory service error now returns the appropriate 500 error message.
Fixed how the ordering of policy bundle mappings is displayed within a table.
Increased the LDAP filter character limit.
Package names within the Vulnerability tab are now sorted alphanumerically.
Upgrading
Upgrading to Anchore Enterprise 4.0.0 involves a database upgrade that the system will handle itself. It may cause the upgrade to take several minutes.
If you currently have Enterprise configured to use the legacy vulnerability scanner, you will not be able to successfully upgrade and start the system unless you explicitly configure the current default vulnerability scanner.
Alternatively, you can remove that configuration variable so the system defaults to the current vulnerability scanner.
If you instead perform a new installation of Enterprise rather than upgrading, the current vulnerability scanner is configured by default.
AnchoreCTL
The latest version of AnchoreCTL is 0.1.4.
AnchoreCTL is dependent on Syft v0.39.3 as a library.
The current features that are supported are as follows:
NEW! Source Repository Management: Generate an SBOM and store the SBOM in Anchore’s database. Get information about the source repository, investigate vulnerability packages by requesting vulnerabilities for a single analyzed source repository, or get any policy evaluations.
NEW! Download full image SBOMs for images analyzed with Enterprise 4.0.0.
Compliance Reports: View and operate on runtime compliance reports, such as STIGs, created by the rem tool.
Corrections Management: View and modify corrections information to help reduce false positives in your vulnerability results.
Image Management: View, list, import local analysis, and request image analysis by the system.
Runtime Inventory Management: Add, update, and view cluster configurations for Anchore to scan, as well as for the inventory reports themselves.
System Operations: View and manage system information for your Enterprise deployment.
12.1.29.23 - Anchore Enterprise Release Notes - Version 3.3.0
Anchore Enterprise 3.3.0
This release offers Rocky Linux support and various UI updates.
Version 3.3.0 also includes other improvements and fixes.
Rocky Linux support
Anchore Enterprise can now scan Rocky Linux images for vulnerabilities.
Configure maximum number of parallel workers
Asynchronous parts of the image deletion workflow in the backend can now be parallelized. You may now configure the maximum number of parallel workers in the catalog configuration.
Fixes
Images that had Go content and hints enabled were failing analysis. This has been fixed.
Images reported via runtime inventory that also had port numbers in the registry host URL were failing to parse properly, which caused scan failures. This issue has been fixed.
NuGet packages were not matched to vulnerabilities correctly. This is now fixed.
With the Grype provider, NVD and vendor CVSS scores were missing for records in non-NVD namespaces. This is now fixed.
Migration code was added to clean up unused feed records and to fix artifact and vulnerability records for the github:os group.
Known Issue
NVD CVSS scores may not be present in the API responses for the request to get a detailed information query about a vulnerability feed record.
There is a workaround to get this information; see the Workaround section below for details.
The issue is only present for a subset of NVD records.
It does not impact the vulnerability reports or findings for images. It only impacts the next-gen vulnerability scanner, so users still on the legacy scanner are not affected.
Details
The /query/vulnerabilities API response contains an nvd_data attribute for each vulnerability in the result. The value of the attribute represents the NVD assigned CVSS scores. This field is not correctly populating for a small subset of vulnerabilities in the system. Instead of a list of results, the value is a null reference as shown below.
Note: This known issue only affects vulnerabilities that exclusively belong in the nvd namespace with Grype as the vulnerabilities provider (next-gen v2 scanner). It does not affect the legacy vulnerability provider.
The API supports a namespace query parameter to filter results based on the namespace. Supply the namespace with an nvd value to view the NVD CVSS scores, as shown in the following example.
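The original example is not reproduced here, but a minimal sketch of such a request follows; the host, credentials, API prefix, and vulnerability ID are placeholders.

curl -u 'admin:foobar' "http://localhost:8228/v1/query/vulnerabilities?id=CVE-2021-0000&namespace=nvd"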
Enterprise UI Changes
Added
Multi-image selection and deletion is now possible in the Repository view.
The login page banner can now be edited to provide customized information, such as how to log in (for example, whether to use SSO or email addresses) and support contact information.
Failed images can now be removed from a repository.
The context of the policy bundle test results view is now preserved as a user changes to different tabs.
Fixes
JSON and CSV downloads from the Policy Compliance tab now include the policy bundle name and data.
Compliance tables now correctly filter based on column data.
Upgrading
Upgrading to Anchore Enterprise 3.3.0 involves a database upgrade that the system will handle itself. It may cause the upgrade to take several minutes.
AnchoreCTL
The latest version of AnchoreCTL is 0.1.3.
AnchoreCTL is dependent on Syft v0.20.0 as a library.
The current features that are supported are as follows:
Compliance Reports: View and operate on runtime compliance reports, such as STIGs, created by the rem tool.
Corrections Management: View and modify corrections information to help reduce false positives in your vulnerability results.
Image Management: View, list, import local analysis, and request image analysis by the system.
Runtime Inventory Management: Add, update, and view cluster configurations for Anchore to scan, as well as for the inventory reports themselves.
System Operations: View and manage system information for your Enterprise deployment.
12.1.29.24 - Anchore Enterprise Release Notes - Version 3.2.1
Anchore Enterprise 3.2.1
v3.2.1 is a patch release of Anchore Enterprise containing targeted fixes and improvements. No database upgrade is necessary.
Enterprise Service Changes
Fixes
Feed syncs no longer fail for GitHub groups.
An unexpected Python package format issue that caused some images to fail analysis has been fixed in Syft to ensure analysis can complete.
Content hints now correctly scan non-OS packages for vulnerabilities.
The Syft invocation during image analysis now uses the analyzer unpack directory consistent with other analysis data IO instead of the OS default temp directory.
Enterprise UI Changes
Fixes
Image Analysis: Vulnerabilities. RPM packages were displayed as RedHat packages in the Vulnerabilities tab, and they used the RedHat icon for SuSE images. The RPM icon is now used, and the package type is now simply described as RPM.
Image Analysis: Ancestry. Image ancestry fetch errors are now gracefully handled inline and do not block image analysis calls if they occur.
Image Selection: Add Image/Add Repository. Opening the Add Registry dialog from the Add Image or Add Repository dialogs would cause the tooltips on the initial dialogs to flicker if you attempted to view them after dismissing the Add Registry dialog. This is now fixed.
Kubernetes Inventory: Analyze Image. On initial presentation of the list of any images detected within a namespace, the buttons that allowed you to analyze new images would be disabled. This was due to an RBAC permission error. This issue is now fixed.
LDAP: Connectivity. LDAP authentication connection timeouts have now been externalized in order to allow customers to directly configure these thresholds, if necessary. These values can be set via the file- or environment-based Enterprise Client application configuration parameters.
Miscellaneous. Various supporting libraries have been updated to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have also been removed to reduce the app startup time and overall size.
Upgrading
No database upgrades are required for this update.
12.1.29.25 - Anchore Enterprise Release Notes - Version 3.2.0
Anchore Enterprise 3.2.0
This release brings the Next-Gen (v2) vulnerability scanner, based on Grype, from Tech Preview into full support and makes it the default for new Anchore Enterprise deployments.
Version 3.2.0 also includes other improvements and fixes.
New Features
Next-Gen scanner is now the default vulnerability scanner
Anchore Enterprise now uses the Next-Gen scanner, based on Grype, for vulnerability scanning. The new scanner replaces the legacy vulnerability scanner, but legacy remains available.
Only new installations will default to the new scanner. Upgrades for existing deployments will use the same scanner as the pre-upgrade deployment unless specifically configured to change.
SUSE Linux Enterprise Server (SLES) support
Anchore Enterprise can now scan SLES and OpenSUSE images for vulnerabilities.
Allow trigger IDs to be added to allow lists
A mechanism to allowlist items other than vulnerabilities, such as package and version rules, has been added to the app.
Fixes
Dependency updates to resolve vulnerability findings.
Enterprise UI Changes
Added
New Secret Search content tab. Secret Search results are now available within the Image Analysis → Contents page. These artifacts are already calculated during analysis, but were not previously visible in the UI.
New Content Search content tab. Content Search results are now available within the Image Analysis → Contents page. These artifacts are already calculated during analysis, but were not previously visible in the UI.
New Retrieved Files content tab. Retrieved Files results are now available within the Image Analysis → Contents page. These artifacts are already calculated during analysis, but were not previously visible in the UI.
The Add / Edit Registry Credentials feature is now accessible from the Account menu. Since the Registry Credentials are at the account level, they were moved from the System view to the top-right Account menu. The feature is also accessible from within the Analyze Repository / Tag modals.
View package metadata from the Vulnerabilities tab main table. As a SecOps user, you can now see more information about a package listed with a vulnerability in the Vulnerabilities tab main table. You can click the Package column entry to assess the impact, and determine if the vulnerability match may be a false positive.
Analyzing images can now be removed in bulk via the Analysis Cancellation / Repository Removal dropdown.
Content tab data is now cached. Content type tabs within the Image Analysis → Contents page are now lightly cached for performance.
Permit gates other than vulnerabilities to be added to an allowlist. This includes package version triggers, and more.
Descriptions can be added upon allowlisting a trigger from within the Image Analysis → Policy Compliance tab.
Fixes
View Reports tab now available for any user with listImages permissions.
Severities filter is now properly handled for scheduled Runtime Inventory Images by Vulnerability queries.
Table columns are automatically resized. When table column widths are greater than the total width of its container, they automatically resize to avoid overlap of text.
LDAP user mappings are now removed upon account deletion.
Miscellaneous: Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
Upgrading
With the new scanning engine there may be slightly different vulnerability results due to improved accuracy. We highly recommend that you reach out and partner with Anchore Support for planning and managing the upgrade to ensure minimal disruption to your workflows and workloads.
Upgrading to Anchore Enterprise 3.2.0 involves a database upgrade that the system will handle itself. It may cause the upgrade to take several minutes.
12.1.29.27 - Anchore Enterprise Release Notes - Version 3.1.0
Anchore Enterprise 3.1.0
This release adds new capabilities for automated runtime inventory scanning, runtime container compliance checks, a new
vulnerability scanner option in tech preview, a new enterprise CLI, as well as other improvements and fixes.
New Features
Runtime Kubernetes Inventory Scanning with UI Support
Building on the runtime inventory features in the 3.0 release, Anchore can now automatically analyze images reported as in use in Kubernetes clusters, so you can easily assess the security risks not only of images in your CI pipelines, but also of images running in production in your clusters. Additionally, the UI now supports visualizations of Kubernetes inventories and the vulnerability and policy compliance status of the inventory by namespace or cluster.
Anchore now includes the ability to execute and collect runtime compliance checks using industry-standard tooling such as OpenSCAP to evaluate running containers’ STIG compliance, or any other compliance specification that can be described and checked using XCCDF profiles.
The new anchorectl tool provides a new Enterprise-focused CLI experience with support for local analysis of images to import
into your deployment. Using the new tool you can also perform other Enterprise operations such as interacting with new compliance reports
and viewing or configuring inventory scanning.
Tech Preview Features
A new vulnerability scanner based on Grype is now available in tech preview. See Vulnerability Scanner V2 for more information.
This scanner is not enabled by default and must be opted into using a configuration value.
Enterprise Service Changes
This release contains a database schema update to version 0.0.8 for the enterprise schema and 0.0.15 for the engine schema.
The upgrade process will modify the db schema and update some tables in the reporting service for any existing runtime
inventory records. Unless you have a very large number of inventory records, the upgrade should complete in seconds to minutes depending
on your database size.
Owned Package Filtering Control
A new configuration option, services.analyzer.enable_owned_package_filtering, is now available in the analyzer service configuration.
By default, the analyzer filters packages that are determined at analysis time to be “owned” by a parent package, when that parent package installs all the files of the child package. That behavior can be disabled by setting this configuration value to “false”.
The default filtering removes false positives associated with packages installed by distro packages that install language
packages such as Python, npm, or gem packages and have backports applied by the distro maintainer with no corresponding
language package version change. However, if you package your own applications as rpms, debs, or similar and need to
ensure all included packages are scanned directly against NVD sources, then you can disable this behavior.
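A minimal config.yaml sketch of this option follows; only the enable_owned_package_filtering key comes from these notes, and the surrounding structure is assumed to mirror the documented key path.

services:
  analyzer:
    enable_owned_package_filtering: false   # set to false to expose all packages regardless of ownership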
Added
New tech-preview vulnerability scanner
Improved Alpine vulnerability scanning by using NVD matches for OS packages for CVEs that are not yet present in the Alpine SecDB
Analyzer service configuration option to control package-ownership filtering. Allows exposing all packages regardless of ownership relationship
Fixed
Adds missing fields and fixes errors in the swagger spec for the API
Restores file package verification data ingress during image load to fix a regression
Malware policy gate can fail causing policy eval error when malware not enabled and other rules precede malware rule in a policy
JSON serialization error in internal policy engine user image listing API
“package_cpe23” field missing in vulnerabilities
Ensure python38 used in the Dockerfile build, and set tox tests to only run py38
Users unable to delete some notification configurations when they should be able to, based on their RBAC role
Improved
Performance of GET operations between services improved by better streaming memory management for large payload transfers
Use UBI 8.4 as base image in Docker build
Updates skopeo version used to 1.2.1, allowing removal of the ’lookuptag’ field in the POST /repositories call for
watching repositories that do not have a ’latest’ tag
RedHat packages for an Out-of-Support distro release version now indicated as being vulnerable if a newer distro release version is supported and indicated as affected for the package.
Additional minor bug fixes and enhancements
Known Issues/Errata
Note: the policy engine feed sync configuration is now in the policy engine service configuration as part of the provider
configuration. The provided helm charts, docker-compose.yaml and default configurations handle this change automatically.
Deprecations
The affected_package_version query parameter in GET /query/vulnerabilities is not supported in the V2 scanner (aka Grype mode)
and has known correctness issues in the legacy mode. It is deprecated and will be removed in a future release.
Enterprise UI Changes
Added
From the new Kubernetes Runtime Inventory view you can now inspect
the spread of compliance and vulnerability information reported by
the KAI agent across all detected
Kubernetes clusters and namespaces in your deployment topology
Information relating to any items detected by the runtime agent is
now surfaced in the repository- and tag-level views within the Image
Selection hierarchy
Improved
If the reporting service fails, feature components that require this
service as a dependency will be disabled in the navigation bar until
service recovery
Pie-chart components have been restructured to present selected
information inclusively when segments are clicked—other segments
are now disabled
Fixes
Printable view assembly issues addressed in Image Analysis Vulnerability
and Compliance views—charts now render correctly in portrait mode
The alerts banner is now subject to RBAC and will not appear if the
fetch alert permission is not detected
Clipping issues resolved in the creation date popup in the Policy Bundle view
Supporting libraries have been updated in order to improve security,
performance, and also to remove deprecation warnings from browser
and server output logs
You can use the Anchore runtime compliance API to gain insight into the security compliance of runtime environments. Tools responsible for executing compliance checks on a running environment are the intended consumers of this general-purpose API, such as the Security Technical Implementation Guides (STIGs) that users can run on a Kubernetes cluster using Anchore’s Remote Execution Manager (REM). These tools can upload the results of an execution to Anchore through this new compliance API, which allows users to leverage additional Anchore functionality like reporting and correlating the runtime environment to images analyzed by Anchore. This enables deeper understanding and insight into an image’s lifecycle and the ongoing security of the runtime environments deploying them.
Usage
The Compliance API can be found in the Enterprise API swagger specification. This API allows for the creation and retrieval of runtime compliance checks and any document reports provided in the creation calls.
The following is an example of the body of an API call to create a runtime compliance check using the Compliance API to be submitted as a multipart form to support file upload:
{"check_type":"oscap",// type of compliance check to report
"result":"pass",// overall result of compliance check
"pod":"postgres-9.6",// k8s or kubernetes pod the compliance check was run against
"namespace":"dev",// the namespace of the pod
"image_tag":"9.6",// tag of the image that the pod is running
"image_digest":"sha256:a435b8edc3bdb4d766818dc6ce22ca3a5e6a922d19ca7001afd1359d060500eb",// the digest of the running image
"start_time":"2021-03-22T15:12:24.580054",// start time of the compliance run
"end_time":"2021-03-22T16:02:24.580054"// end time of the compliance run
"result_file":"path_to_file","report_file":"path_to_file}
Two fields are required for the creation of runtime compliance checks. The type field references the type of scan that generates the report. The only supported option is oscap, which stands for OpenSCAP. The other required field is image_digest, which represents the image used by the container that the runtime compliance check was run against.
While not required, the status attribute is used to designate whether the given compliance check has passed or failed. There are several additional metadata fields provided to further contextualize the runtime check, such as the pod and namespace that the check was run against.
One of the other key functionalities of this API is the ability to attach a report_file and a result_file to the created runtime compliance checks. This can be the direct output generated by the runtime tool itself, such as an OpenSCAP XML document. This allows for entire reports to be stored within Anchore using the object storage, which allows for a number of options for how and where this data will be preserved.
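As a hedged sketch, a creation request submitted as a multipart form might look like the following; the endpoint path, host, and credentials are placeholders, and the form field names are taken from the example body above rather than from the swagger specification.

curl -u 'user:password' \
  -F 'check_type=oscap' \
  -F 'result=pass' \
  -F 'image_digest=sha256:a435b8edc3bdb4d766818dc6ce22ca3a5e6a922d19ca7001afd1359d060500eb' \
  -F 'result_file=@/tmp/anchore/result.xml' \
  -F 'report_file=@/tmp/anchore/report.html' \
  'http://localhost:8228/v1/enterprise/<runtime-compliance-endpoint>'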
Once created, runtime compliance checks can be retrieved using the GET endpoint specified in the Swagger spec. The corresponding result and report files can be retrieved by pulling the file_ids from a runtime compliance check and querying the endpoint for runtime compliance results using the specified result_id.
12.1.29.27.1.1 - REM
Remote Compliance Check
Anchore Enterprise Remote Execution Manager (REM) enables an operator to run a STIG compliance check for a defined container
within a Kubernetes Cluster. REM contains functionality to perform package management such as installation and removal
of OpenSCAP, retrieval of generated results files, and upload capabilities to the compliance API. There is also a provided
local data-store if upload functionality is disabled or unavailable.
REM can work well out-of-the-box with minimal required configurations.
At the very least, REM needs to be able to authenticate with the Kubernetes API, know which command to run, and know which
pod and container to connect to. If you have a Kube Config at ~/.kube/config, REM will use that by default.
It is always recommended to use the configuration file that is attached to each release as an artifact.
The example configuration file in the repository is a good reference for explaining which configuration key does what.
Pod configuration
This section will describe the minimum required configuration required for REM to work.
In the file, you can specify kubernetes pod information in the following section:
# This section tells REM the execution details for the STIG check
report:
  # Pod Name, Namespace, and Container Name are required so that REM knows where to exec the stig check
  podName: "centos"
  nameSpace: "default"
  containerName: "centos"
  # These must be set via the file, and correspond to the command being executed in the container
  # For example, if your compliance check command looks like this:
  #   oscap xccdf eval --profile <profile> --results /tmp/anchore/result.xml --report /tmp/anchore/report.html target.xml
  # The values for --results and --report should match the values of these configurations.
  # The file paths defined here are also where REM downloads the files from the container. You can think of it like this:
  #   docker cp container:/tmp/anchore/report.html /tmp/anchore/report.html
  reportFile: "/tmp/anchore/report.html"
  resultFile: "/tmp/anchore/result.xml"

# REM supports Kubernetes configuration in the following manner:
#   1. If you have a kubeconfig at ~/.kube/config, you don't need to set any of the fields below; REM will just use that
#   2. If you want to explicitly specify Kubernetes configuration details, you can do so in each field below (ignore path)
#   3. If you are running REM within Kubernetes, set path to "use-in-cluster" and set cluster to the cluster name; you don't need to set any of the other fields
kubeconfig:
  path: ""            # set to "use-in-cluster" if running REM within a kubernetes container
  cluster: ""
  clusterCert:        # base64 encoded cluster cert
  server:             # ex. https://kubernetes.docker.internal:6443
  user:
    type:             # valid: [private_key, token]
    clientCert:       # if type==private_key, base64 encoded client cert
    privateKey:       # if type==private_key, base64 encoded private key
    token:            # plaintext service account token
As an alternative, or a way to override the setting in the configuration file on the command line, you can pass a few flags to set new values.
Here, <cmd> is the full oscap command to execute within the container, and the args before the double hyphen '--' are telling REM where to run the command
$ rem kexec -n <namespace> -p <pod> -c <container> -k <kubeconfig-path-override> -- <cmd>
Example (this will use kubeconfig at ~/.kube/config)
$ rem kexec -n default -p anchore-pod -c anchore-container -- oscap xccdf eval --profile standard --result /tmp/result.xml --report /tmp/report.html target.xml
Note: The double hyphen -- is important because it tells REM that all subsequent flags should be passed to the container command
A full list of the options supported by the rem kexec command can be found by running the command with the -h or --help option
i.e.
rem kexec --help
Compliance Tool Installation
Enable the following section in the configuration file.
command:
  ...
  oscap:
    # This boolean flag tells REM whether or not to try to install OpenSCAP into the container (if the command is oscap)
    installEnabled: true
    # This boolean flag tells REM whether or not to try to uninstall OpenSCAP from the container
    # (after the oscap command runs and the result/report files get downloaded)
    uninstallEnabled: true
After the installation option has been enabled, the operator can either manually install the compliance tool
or allow REM to automatically install the missing tool needed to run the compliance check.
Note: uninstallEnabled can be set to false if you intend to leave the tool available in the container.
Running the following will install OpenSCAP but this is not mandatory.
> rem kexec install oscap
Run a compliance check
There are two options for running the check: specify the command on the command line, or have REM read it from the configuration file.
command:
  # If no command is specified through arguments passed to the application on the command line, this command will be used
  # Each element of the list is interpreted as part of the command
  # I.E. echo 'hello-world' > /tmp/test.txt would look like:
  #   cmd:
  #     - echo
  #     - 'hello-world' > /tmp/test.txt
  cmd: oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig --fetch-remote-resources --results /tmp/anchore/result.xml --report /tmp/anchore/report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
Once the check has completed, the report and results files should be located at the paths passed to OpenSCAP.
Custom STIG targets
REM has the option to allow the operator to specify a custom target by setting a path under customTargetPath.
# If a custom OSCAP profile is desired, specify its path here
# Note: this will be placed into a /tmp/anchore/ directory in the container at runtime, so the command being executed should reference it at that location
customTargetPath: <local path to target>/custom.xml
Audit uploads
REM has an audit database that is used to track which compliance checks have been successfully run. This also
serves as a method to ensure fault tolerance in cases where reports have not been uploaded due to unavailable
service connections to Enterprise. REM will mark those uploads as incomplete, allowing the operator to issue a flush
command and push the remainders to Enterprise.
Database subcommand
To list the current state for all past transactions issue the following command:
> rem db list
In order to retrieve detailed information about a transaction, use the db get command with the id:
> rem db get 1
To push all results which have been marked as not uploaded, issue the following command (note: the --dryrun flag will show you the records which will be processed):
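The exact command is not shown in this copy of the notes; based on the flush operation described above, it is presumably of the form shown below (a hypothetical invocation; consult rem db --help for the actual subcommand and flags):

> rem db flush --dryrun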
A new vulnerability scanning engine, based on Grype, improves performance, reduces
database load, and provides better vulnerability matching results. It includes a new vulnerability feed sync process
integrated into the Enterprise feed service that also provides faster feed syncs from the feed service to the policy engine.
Tech Preview Status
Note: The v2 scanner is intended for use in sandbox or staging environments in the current release. It is not possible to
run both vulnerability scanners at the same time. This configuration is picked up at bootstrap, and cannot be changed
on a running system.
The new mode must be set at deployment time; the scanner is configured at service startup.
Switching modes in a deployment is not supported.
Downgrading from the v2 scanner back to the legacy scanner is not supported.
Some features of the policy system are not yet supported in this mode:
The following vulnerability gate triggers are not supported for the new scanner and will return incorrect results when used:
vulnerability_data_unavailable
stale_feed_data
Windows container scanning is not yet supported. Support will be added in the next feature release.
Proprietary vulnerability feeds are not yet supported in this scanner. Support will be added in the next feature release.
Running with docker compose
Install or update to Anchore Enterprise 3.1.0
Add the following environment variable to the policy engine container section of the docker compose file:
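The specific variable is not reproduced in this copy of the notes; based on the Grype provider configuration referenced elsewhere in this documentation, the change is presumably along these lines (the service name and variable shown are assumptions to verify against the 3.1.0 deployment docs):

policy-engine:
  environment:
    - ANCHORE_VULNERABILITIES_PROVIDER=grype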
After making the relevant change above and redeploying, the system will start up with the v2 vulnerability scanner enabled and will
sync the latest version of grype db as built by your local feed service. Note that legacy feeds will no longer be synced while the v2 scanner is configured. All vulnerability data
and scanning will now come from the ‘grypedb’ feed.
Vulnerability Feed Data and Syncs
The v2 scanner has its own feed sync mechanism that generates a Grype vulnerability DB from your locally installed feed
service instead of https://ancho.re as used by Grype itself. This results in a much faster sync process since the DB is
packaged as a single database file. It also reduces load on the Engine DB since the scanner matching and syncs do not
require large amounts of writes into the Engine DB. The Grype vulnerability DB is built from the same sources as the
legacy service, so there is no reduction in scan coverage or vulnerabilities sources supported.
The feed synced by the Grype provider is identified as feed name ‘grypedb’ when using the feed listing API or anchore-cli system feeds list CLI command.
12.1.29.28 - Anchore Enterprise Release Notes - Version 3.1.1
Anchore Enterprise 3.1.1
v3.1.1 is a patch release of Anchore Enterprise containing targeted fixes and improvements. No database upgrade is necessary.
Enterprise Service Changes
Note: the Content Hints feature now only supports adding new packages to the analysis report and can no longer modify or update package records found
by the package analyzers. This is to ensure unintended conflicts do not occur.
Fixes
Some RPMs content in hints file causes analysis to fail
Feeds service driver for MSRC safely handles unexpected data from upstream
Feed service MSRC driver should not require a Microsoft API key
Ubuntu feed service driver git repo sync error
Github feed service driver incorrectly categorizes some data
Sometimes get error when trying to analyze image due to finding unsupported package types
Remove unused nvd scores from normalized vulnerability records
Alpine feeds driver to use CVSS v3 for severity scoring instead of CVSS v2
Events not generated correctly if an image digest has multiple tags
Ensure content hints do not conflict with findings from analyzers, and only add entries rather than modify existing analysis findings
SSL Error handling in swift objectstorage driver
Syft/Stereoscope cache in /tmp not cleaned up after image analysis
Adds will_not_fix field to vulnerability report API response
Adds will_not_fix field to /query/vulnerabilities response
Wrong tag may be used for image download during analysis if the digest is mapped to multiple tags
Dependency updates to resolve non-impacting vulnerability findings
Additional minor bug fixes and enhancements
Enterprise UI Changes
Fixes
Socket protocol now enforceable via configuration to avoid false positives with application scanners
Allow expiration of allowlist item to be set via Vulnerabilities table view in Image Analysis
Security vulnerability in package WS addressed
Add/Edit User & Add/Edit LDAP User Mapping modal content overflows issue fixed
Enable System button for users with correct requisite permissions
Users without correct permissions can no longer directly access app routes via URL
Default allowlist expiration now set to 30 days
Items with vulnerabilities inherited from base image can now be excluded by filter in Vulnerabilities view in Image Analysis
Users can now be prevented from accessing the app for a configurable amount of time after a configurable number of invalid login attempts
Improved internal field validation to prevent unexpected input in AppDB report routes
Additional minor bug fixes and enhancements
Upgrading
No database upgrades are required for this update.
AnchoreCTL
Updates vendor_only option to default to true for consistent experience with users coming from anchore-cli
12.1.29.29 - Anchore Enterprise Release Notes - Version 3.0.3
Anchore Enterprise 3.0.3
v3.0.3 is a patch release of Anchore Enterprise containing targeted fixes and improvements. No database upgrade is necessary.
Enterprise Service Changes
Fixes
Better vulnerability listing API call performance
Fixes regression in 3.0.0+ that made “hints” feature cause analysis errors of images for some package types
Large image analysis load failures from catalog to policy engine due to connection timeout. Makes timeout configurable.
Updates internal Syft to 0.15.1 to reduce java package CVE false positives and include CPE permutations that replace hyphens with underscores for better matching
Fixes Ubuntu feed mappings from name to version via configuration
Adds new debian releases for vulnerability feeds and makes new ones configurable without software upgrades
Enterprise UI Changes
Fixes
Adds package path to vulnerability listing table to differentiate findings for packages present in multiple locations
Report manager timezone string conversion error
The CSV report data for an image that is a descendant of a base image would not show the Inherited From Base column header in the output if the dataset contained items that were false
In the Print Report view for Vulnerabilities in Image Analysis, the appearance of the View Report button was obscuring the values held in the Vulnerability ID column
The Anchore Service Version (previously, Anchore Engine Version) in the About Anchore Enterprise Client modal will now update dynamically if the services are upgraded in the background
12.1.29.30 - Anchore Enterprise Release Notes - Version 3.0.2
Anchore Enterprise 3.0.2
v3.0.2 is a patch release of Anchore Enterprise containing targeted fixes and improvements. No database upgrade is necessary.
A flaw has been discovered in Anchore Enterprise versions 3.0.0 and 3.0.1 that partially affects java software detection and GHSA vulnerability matching. If a container image has java artifacts that are embedded within java artifacts (i.e. jars in jars), AND certain embedded java artifacts have certain forms of malformed metadata, Anchore analysis can fail to report on the top-level java artifact and all artifacts embedded within. The fingerprint of this issue is apparent when the SBOM (content) reports from Anchore show incomplete or missing java packages compared to the same reports generated from Anchore Enterprise versions prior to 3.0.0. In addition, while Anchore Enterprise uses several vulnerability data feeds when performing matches against java artifacts, another flaw was discovered that prevented Anchore Enterprise from matching java artifacts with records from the GHSA data feed (other feeds, including NVD, Third-party, and OS feeds were still being consulted). The fingerprint for this issue would manifest as missing GHSA matches when compared to results from versions of Anchore Enterprise prior to 3.0.0. Both flaws have been addressed in Anchore Enterprise version 3.0.2.
Enterprise Service Changes
Fixes
Fixes issue where java artifacts are not being matched against records from GHSA feed - synthesize pom properties contents in syft mapper. See https://github.com/anchore/anchore-engine Issue #950
Updates syft to 0.14.0 to fix missing java elements from image SBOM, for embedded java artifacts combined with malformed pom.properties metadata. See https://github.com/anchore/syft Issue #349
Enterprise UI Changes
Fixes
Updates to the security model surrounding the stored data presented in the LDAP mapping management view.
Within System > Accounts, Role dropdowns are no longer truncated by the boundary of the user management dialog and will now display all entries without needing to scroll the list.
Various supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs. Redundant libraries have been removed to reduce the app startup time and overall size.
12.1.29.31 - Anchore Enterprise Release Notes - Version 3.0.1
Anchore Enterprise 3.0.1
v3.0.1 is a patch release of Anchore Enterprise containing targeted fixes and improvements. No database upgrade is necessary.
Enterprise Service Changes
Adds
Adds new “inventory-agent” RBAC role with minimal permissions for use with KAI agents deployed in kubernetes clusters
Adds wildcard support for GraphQL query filters in reporting service
Fixes
Increases the TTL for kubernetes runtime inventory items to 120 days from 1 day after last being seen
Fixes an issue where RHEL/UBI images imported to the engine using Syft had a different distro name that caused OS vulnerability checks to be skipped, by mapping them to the proper “rhel” namespace and adding support for “redhat” namespaces.
Fixes false positive vulnerabilities due to application packages (npm, python, etc) being installed via rpms, debs, or other distro packages. Filters those “owned” packages from analysis leaving the parent rpm/deb/apk for vulnerability matching.
Fixes SSO login not working after upgrade from 2.4.x to 3.0.0 with “invalid_client” error
Fixes analysis failures due to python package manifest format issues causing Syft errors
Fixes image ancestor lookup failures due to ancestor being deleted or not found
Fixes ubuntu feed driver failures during data fetch processing
Enterprise UI Changes
Fixes
Fixes bulk delete image count limitations during repo cleanup
Fixes potential SSO issues due to Redis connection errors
12.1.29.32 - Anchore Enterprise Release Notes - Version 3.0.0
Anchore Enterprise 3.0
This represents a significant update to Enterprise, requiring database upgrades and adding new components to the system, including an optional deployable agent for gathering
runtime image inventory from Kubernetes clusters.
New Features
Runtime Kubernetes Inventory
Anchore can now receive inventory reports from a new agent that runs in the Kubernetes cluster and reports which images are used in which namespaces. The new agent is KAI, and
it can run within your Anchore deployment or in another Kubernetes cluster to report which images are in use over time. This allows Anchore to show which
images are in use and facilitates focused triage and attention on in-use images, ensuring you address security findings in these most critical images first.
Anchore now provides fix suggestions for policy violations and a notification delivery mechanism so you can quickly and conveniently send notifications (email, Slack, MS Teams, Github, Jira, etc) to the image maintainer
so they can take corrective action for policy findings such as updating packages, modifying the Dockerfile, or rebuilding on a new base image.
The UI and API now present stateful alerts that will be raised for policy violations on tags to which you are subscribed for alerts. This raises a clear notification in the UI to help initiate the remediation
workflow and address the violations via the remediation feature. Once all findings are addressed the alert is closed, allowing an efficient workflow for users to bring their images into compliance with
their policy.
Improved Pipeline Scanning Integration
Anchore now has the ability to accept an SBoM and image metadata from analysis run inside your CI/CD pipelines or locally on developer machines, and load it into the system for processing without requiring
images to be pushed to an image registry. This enables more efficient scanning inside the pipelines and less data transfer to decrease the overall time to result. Analysis results are provided by Syft, which
is integrated into Anchore itself for SBoM generation of packages as well.
A new configurable maximum image size has been added to the system to enable administrators to ensure that very large images are not admitted into Anchore causing potential QoS or resource usage issues.
New capabilities in the analysis archive rules allow more efficient description of what to archive and what to exclude, as well as the ability to set rules that limit the number of images in each account to help with capacity management.
These new capabilities include new selectors for exclusions to rules, so that broader rules can be used to select candidates for archival with exclusions set only for specific images, tags, or repos.
See [Analysis Archive Rules]
Enterprise Service Changes
In Enterprise 3.0, the system now requires the deployment configuration to explicitly set a default admin password and will fail system initialization if one is not found in the configuration. This is automatically
configured for users of the helm chart and our Quickstart docker-compose.yaml, but if you have a custom deployment template and create a new deployment of Anchore, you must ensure that the default_admin_password field
is set in the config.yaml used by the catalog component.
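A minimal sketch of the required setting in the catalog component's config.yaml (the value is a placeholder; choose your own secure password):

default_admin_password: '<choose-a-strong-password>'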
Added
Adds ability to block large images based on configurable max size
Adds configurable image max size value to optionally limit size of images analyzed by system
Adds corrections API to support artifact metadata corrections for false positive management
Adds kubernetes inventory ingress API and agent to run in kubernetes clusters to track image inventory
Adds additional archive analysis rule exclusions to allow rules to be broader but have specific exclusions so that fewer rules can be used
Adds API to upload analysis SBoM generated by Syft and imported as image for remote analysis without Anchore retrieving the image content directly
Adds CPEs used for vulnerability matching against NVD and VulnDB data to the image content output itself for greater transparency in matching
Use Python 3.8 instead of Python 3.6
Update base image for containers to UBI 8.3 from UBI 8.2
Improved
Improved - Updated output messages and description for vulnerability_data_unavailable trigger and stale_feeds_data trigger to clarify only OS packages impacted.
Improved - Do not allow selectors to be empty unless using max_images_per_account_rule.
Improved - Updates Syft to version 0.12.4 to fix several issues in image analysis including empty package names/versions in invalid package.json files and java jar parent references being Nil.
Improved - Require user to set explicit default admin password at bootstrap instead of defaulting to a value if none found.
Improved - Update Authlib to 0.15.2 from 0.12.1
Improved - Update PyYAML to 5.4.1
Improved - Update Passlib to 1.7.4
Improved - Update to use Python 3.8
Improved - Update base image to UBI 8.3.
Fixes
Fixed - Failed analysis caused by incorrect manifest MIME types resulting from a bug in buildah that placed an incorrect content type in the manifest at build time.
Fixed - External API service swagger spec for GetRegistry response is inconsistent with actual returned JSON.
Fixed - Fixed analysis archive rules that did not fire if delete transition rule present.
Fixed - Force re-analysis of tag and digest rejected if create_at_override timestamp not provided.
Additional minor bug fixes and enhancements
Known Issues/Errata
False Positive Management feature incompatible with legacy Engine report/query routes
Using the GET /query/images_by_vulnerability endpoint for querying all images with a specific vulnerability using the legacy Engine API does not support the new False Positives management feature.
It is recommended to use the Enterprise Reporting Service GraphQL API to get the same information that does support the corrections.
Change to now require an explicitly set default admin password at system bootstrap may cause issues with some deployment templates
If your deployment template (chart, docker compose, etc) does not ensure that the config.yaml for services includes default_admin_password explicitly set to a value, the system will no longer
bootstrap with a built-in default. This does not affect our updated helm chart or quickstart docker-compose.yaml files, those have been updated appropriately.
Enterprise UI Changes
Added
New compliance landing page
Alerts view in dashboard
Alerts selection for repositories and tags
Remediation workflow and Actions Workbench
Kubernetes runtime inventory reports
“Last Seen” timestamp in image analysis to overlay runtime inventory in analysis view
12.1.29.33 - Anchore Enterprise Release Notes - Version 2.4.0
Anchore Enterprise 2.4.0
Features & changes of note:
Malware scanning capabilities
Image ancestry and comparison of an image’s policy and vulnerability findings with a base image
New analyzers for binaries not delivered in packages
A “content hints” capability in the analyzers so developers or image builders can pass metadata to augment analysis results
UI improvements for scanning and deleting repositories with warnings for very large repositories
Asynchronous deletion of images
A new enterprise extension to the external API, available with base route: /v1/enterprise/
Malware Scanning
Anchore Enterprise now integrates ClamAV for optional (disabled by default) malware scanning of image content during the analysis phase and with policy rules to trigger on findings. This is particularly useful for using Engine to validate
external images in an image catalog or “golden repo” where you must guard against both vulnerable and malicious code from external sources.
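As an illustrative sketch, a policy rule that stops on malware findings might look like the following; the trigger name and action are assumptions to verify against the policy gate reference for your release.

{
  "id": "malware-stop-1",
  "gate": "malware",
  "trigger": "scans",
  "action": "STOP",
  "params": []
}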
Image Ancestry and Base Image Comparisons for Vulnerabilities and Policy Findings
Anchore now provides an API and UI enhancements to show an image’s base image as well as any images in its ancestry. Using this information, Anchore can now also show which
vulnerabilities and policy findings are inherited from an image’s base. This allows quicker triage, analysis, and remediation of image findings. See Base Images.
New Binary Content Type
A new binary analyzer will check for and inspect binaries that are often installed outside of package managers. This supports a common use-case of language or runtime-specific base images
such as Python and Go images where the runtime is installed via an archive and thus no package db entry exists. The analyzer supports specific binaries that it searches and can get metadata for: Go, Python, and BusyBox.
Once detected, these are checked for vulnerabilities, just like regular packages using the NVD and other non-OS vulnerability sources.
Content Hints
A new “hints” feature allows users to pass specific metadata into the analyzers to help identify and augment content that existing analyzers would not otherwise have discovered. This feature is useful
if you have libraries statically compiled into another binary or installed outside of a package manager that you want to tell Anchore about, so you can get vulnerability matches for them and include them in the
image’s content manifests. This is accomplished with a specific JSON file present in the image: /anchore_hints.json. The entries are merged into the analyzer results to augment their findings for
different content types.
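A minimal example of such a hints file, assuming the package-list structure commonly used for this feature (the package name, version, and type are placeholders):

{
  "packages": [
    {
      "name": "musl",
      "version": "1.1.25",
      "type": "apk"
    }
  ]
}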
Image deletion is now an asynchronous operation and the image_status property in the image record now has possible states ‘active’ and ‘deleting’. Deletion of an image by API call will
transition the image record to a deleting status as indicated by the image_status property. Images in that state will be deleted by an asynchronous process on a duty cycle. This approach helps manage database
load under a high volume of delete operations and also makes the client-perceived response time much lower.
NOTE: Responses for GET /images and GET /summaries/imagetags do not include images in the deleting state by default, though new query parameters
allow those images to be returned in those calls if desired (image_status=deleting or image_status=all)
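For example, a request along these lines would include deleting images in the listing (host, credentials, and API prefix are placeholders):

curl -u 'admin:foobar' 'http://localhost:8228/v1/images?image_status=all'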
New API Extension for Enterprise
The engine API is updated to version 0.1.15
There is also a new enterprise extension to the API, available at /v1/enterprise/ that has its own version (0.1.0) and has calls specific to the Enterprise edition. These calls include
the new base-image comparison and ancestry API calls.
The swagger spec for this is available from the service using the route: /v1/enterprise/swagger.json
Enterprise Service Changes
Added
Changed image deletion to asynchronous behavior to make API more responsive and throttle db load during image deletes.
New dry-run mode for repository scan request to return list of tags that would be scanned without scanning or persisting the record.
Support for a “hints” JSON file in the image. JSON file to pass additional metadata to augment analyzer findings.
Adds API support for deleting multiple images in a single call.
Support for malware scanning using ClamAV and new ‘malware’ content type in API and policy gate to trigger on findings as well as scan not run. Disabled by default.
Support for content type ‘binary’ with analyzers to detect specific binaries: Python, Golang, Busybox not installed by package manager.
Query parameter filters for GET /images calls to filter by image_status and analysis_status.
Ability to indicate which vulnerability findings are inherited from the base image.
Ability to indicate which policy findings are inherited from the base image.
API call to return the ancestor images (parent and base images) for an image.
Improved
Change image analysis queue processing behavior to fair share across accounts.
Removes image_to_get property in GET body of /images route, since body in GET operations is not standard behavior.
Fixes
Handle scratch images correctly in files gate behavior.
Add missing fields in swagger JSON spec for GET /query/vulnerabilities.
Better handling of java packages missing certain metadata in MANIFEST.MF files.
Additional minor bug fixes and enhancements
Enterprise UI Changes
Added
New image count summaries to Image Analysis pages
Warning during “Analyze Repo” workflow if the repo to be added has a lot of images, allowing users to cancel the operation before the analysis is requested to avoid unintentional workload.
“What’s New” message on initial login and available in the “About Enterprise Client” selection from the Account dropdown in the top right of the screen.
“Binary” content type in Image Contents tab
“Golang” content type in Image Contents tab
“Malware” content type in Image Contents tab
Ability to cancel all pending analyses from a single repository
Download for report preview data as JSON and CSV
Show image’s base image in Image Overview page
Shows which vulnerabilities are inherited from the base image in the Vulnerabilities view
Shows which policy findings are inherited from the base image in the Compliance view
Images can be deleted via the UI
Repository deletion (deletes all image analyses for that repository)
Improved
Order accounts by name in Associate Accounts pop-up
Analysis status is its own column to allow sorting by analysis status in repository view of tags
Adds total image counts to repository view, main page, and tag listing
Improved error handling in GraphQL responses for Reports
Custom control for relative time filters in reporting
Fixes
Some table cell truncations for image digests and other fields in the Image Analysis tab
Changelog showing entries when no change is apparent
Re-analysis of images that failed analysis
Version mismatch after container restart
Results shown after whitelisting an item with an inactive bundle
Built on Anchore Engine v0.8.1: Anchore Enterprise is built on top of the open-source Anchore Engine, which has received new features and updates as well. See Anchore Engine Release Notes for information on new features, bug fixes, and improvements in Anchore Engine.
12.1.29.34 - Anchore Enterprise Release Notes - Version 2.4.1
Anchore Enterprise 2.4.1
v2.4.1 is a patch release of Anchore Enterprise containing targeted fixes and improvements. No database upgrade is necessary.
Enterprise Service Changes
Added
Ability to set pool_recycle and other SQLAlchemy engine parameters via config in the db_engine_args section of config.yaml (see the sketch after this list)
Updates image build to support dynamic UID mapping in OpenShift
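A minimal sketch of what this might look like in config.yaml. The placement under the database credentials block and the values shown are assumptions for illustration only; any keyword argument accepted by SQLAlchemy’s create_engine could be supplied, so check your deployment template for the exact location and supported keys:
credentials:
  database:
    db_connect: 'postgresql://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}/${ANCHORE_DB_NAME}'
    db_engine_args:
      pool_recycle: 3600   # seconds before a pooled connection is recycled (illustrative value)
      pool_pre_ping: true  # another standard SQLAlchemy engine argument, shown for illustration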
Fixes
RedHat CVE normalization in feed service did not ensure only one fix record per vulnerability in all cases
Orphaned feed service driver tasks left in ‘running’ when the system shuts down are now cleaned up on restart and marked as failed
Fixes small size limit of data scanned by ClamAV; adds a default 4GB max size and configuration options to make it smaller. Errors if the image is larger than that limit (ClamAV does not support larger sizes)
Report ClamAV malware findings that do not include a file path as ‘unknown’ rather than skipping
Policy engine should return HTTP 400 with instructive message on invalid bundle upload instead of HTTP 500
Vulnerability fix version not correct for vulnerabilities with multiple fixes
NPM and Gem packages not matching GHSA sources properly
Update urllib3 to 1.25.9 to address CVE-2020-26137 even though Anchore not affected by that issue
Deactivating or deleting repo subscription does not halt in-progress repository scans and can result in analysis being added after the subscription is removed
Additional minor bug fixes and enhancements
Enterprise UI Changes
Improved
Only show enabled feeds and groups via system health
Updates image build to support dynamic UID mapping in OpenShift
Fixes
Repositories with no watch subscription can cause UI errors during deletion
Lack of error message in case of creating/updating password with value that is too short to pass validation
Built on Anchore Engine v0.8.2: Anchore Enterprise is built on top of the open-source Anchore Engine, which has received new features and updates as well. See Anchore Engine Release Notes for information on new features, bug fixes, and improvements in Anchore Engine.
12.1.29.35 - Anchore Enterprise Release Notes - Version 2.3.2
Anchore Enterprise 2.3.2
A release of Anchore Enterprise containing bug fixes and improvements.
Improved
Adds retry wrapper on image download operations on analyzer. Implements #483
Updates serialize-javascript dependency to 4.0.0 to bring in fix for CVE-2020-7660 (Anchore unaffected)
Adds HEALTHCHECK to UI image
Removes npm installation from UI image to remove all the unused artifacts it brings in
Bug Fixes
Adds release to version string for all os package types if one is present. Fixes #504
Fixes global analysis archive rule application for non-admin accounts. Fixes #503
Fixes LDAP service tab failure when account mappings cannot be retrieved from the service API
Fixes multiple vulnerability fix records from RHEL driver by collapsing to a single fix for correct semantics
Updates Alpine SecDB driver to use new source (https://secdb.alpinelinux.org) for data and new download process. Adds Alpine 3.12 support
Fix - Some db tables not created correctly for certain upgrade paths
Additional minor bug fixes and enhancements
Built on Anchore Engine v0.7.3: Anchore Enterprise is built on top of the OSS Anchore Engine, which has received new features and updates as well. See Anchore Engine Release Notes for information on new features, bug fixes, and improvements in Anchore Engine.
12.1.29.36 - Anchore Enterprise Release Notes - Version 2.3.0
This release focuses on enabling the Microsoft ecosystem within Anchore so that the same analysis flows and pipelines you use for Linux images can be applied to Windows images
as well, for a consistent approach across ecosystems. It also includes several enhancements to the reporting and event management features of the UI.
New Features
Windows Container Image Support
Analyze and get vulnerabilities for Windows OS-based containers. Anchore ingests Microsoft vulnerability data via the MSRC.
There is no requirement to run Anchore itself on Windows, and no other infrastructure changes are needed to deliver this feature.
NuGet/.NET Package Support (Tech Preview)
Detection and inclusion in analysis output as well as vulnerability scans
GitHub Advisories vulnerability data
See Configuring GitHub advisories for information on configuring the new feed including creating a GitHub token the driver can use for API calls to GitHub.
Scheduled Reports
Create report templates for easy re-use of your most frequently used reports
Schedule reports for generation and get notifications when they are ready, delivered via Slack, email, webhooks, and the other supported notification integrations Enterprise provides.
Event Management in the UI
Improved sorting, filtering, and deletion of events in the UI directly
Improved RHEL/CentOS vulnerability matching using CVE-based feeds instead of RHSA-based data
To help provide early detection of vulnerabilities before a fix is available, or for issues where a fix is not issued, Anchore now uses RedHat’s CVE information instead of RHSA information.
This also provides improved whitelist consistency between RHEL/CentOS and images based on other distros, since CVEs are consistent across them.
Improved feed data and configuration management via APIs and CLI
New APIs and CLI commands allow dynamic configuration of which feeds to sync and the ability to enable/disable and delete feed data without updating configuration files or restarting containers.
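For example, feed groups can now be listed, enabled or disabled, and flushed at runtime with the CLI (the group names below are examples):
anchore-cli system feeds list
anchore-cli system feeds config vulnerabilities --disable --group centos:8
anchore-cli system feeds config vulnerabilities --enable --group rhel:8
anchore-cli system feeds delete vulnerabilities --group centos:8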
Built on Anchore Engine v0.7.1: Anchore Enterprise is built on top of the OSS Anchore Engine, which has received new features and updates in the 0.7 series. See Anchore Engine Release Notes for information on new features, bug fixes, and improvements in Anchore Engine for versions v0.7.0 and v0.7.1.
Changes
Starting in 2.3.0 all services except the UI in an Enterprise deployment must:
Have the license.yaml available in /license.yaml inside the image. This is currently how the Notifications, Reports, and RBAC services are run, and is now extended to all services.
Be started with the anchore-enterprise-manager command instead of anchore-manager. This ensures that enterprise extensions and functionality are properly loaded and available.
The docker-compose.yaml is no longer built into the image, but is available in the Docker Compose guide via a link to download. The image versions will be set to the release version matching the documentation version.
These changes are all configured by default in the new Docker Compose guide and are also enabled in the updated Helm chart for this release.
As with previous releases, we recommend upgrading with the newest deployment templates rather than just changing the image references in existing templates.
Bug Fixes and Enhancements
Fixed user deletion and role removal failures
Uses NVD severity for Debian vulnerabilities when the ‘urgency’ field is not set in the upstream data
Updates the alpine feed driver to ensure severities are set using newer nvd2 driver data instead of the older nvd driver, which may have had stale data due to the old NVD XML feed
Adds new ‘--no-auto-upgrade’ option to anchore-enterprise-manager to start services that will not upgrade the db automatically, enabling more control over the upgrade process
Fixed Report CSV/JSON download missing records in UI
Fixed scrollbar functionality issue in Policy Bundle editor in UI
Fixed missing scrollbar for context switching in UI
Fixed problem with sorting vulnerability columns in UI causing hangs and missing links
This is a significant upgrade. Backups should be taken, and downtime expected to complete the process.
NOTE The upgrade from 2.2.x to 2.3.0 will take at least several minutes for the database schema upgrade and involves a data migration that can take longer to fully transition the RHSA data to CVE data. Part of this process is done during
the database upgrade, but part of the process can only complete after the upgraded feed service is able to run and sync the new RedHat CVE data. Because of this, there will be an interval where RHEL-based images
will have no vulnerabilities listed. That will automatically resolve itself once the feed syncs, and all affected images will have CVE-based vulnerability matches as expected, but depending on deployment environment and number
of images in the database, this may take a long time (hours potentially).
To upgrade, use the new version of the Helm chart or docker compose provided with this release. The new chart and compose files contain all needed configuration changes. See Enterprise Upgrade to 2.3.0 for details on this specific upgrade process and how to update your own deployment templates if you are not using the official Helm chart.
12.1.29.36.1 - RHSA to CVE Feed Changes for RHEL-Based Images
Starting in Enterprise 2.3.0, Anchore Enterprise uses the RedHat Security API for CVEs for vulnerability matches for RHEL, CentOS, and UBI images. This
is a change from previous releases that utilized the API for Advisories (RHSAs) instead.
What Changed
In short, rhel:* replaces centos:* in the vulnerability feed for matches against RHEL-based distros such as CentOS and UBI.
Specifically, in Enterprise 2.2.x, all RHEL-based images (CentOS, RHEL, UBI) used data from the RedHat Security Advisories API. This data populated
the centos:* groups of the vulnerabilities feed, as seen when you run anchore-cli system feeds list or via the UI’s system page showing feed syncs.
Starting with Enterprise 2.3.0, RHEL-based images match against a new feed source by default: data from the RedHat CVE API.
This new source populates the rhel:* groups of the vulnerabilities feed. The centos:* groups are no longer used for matches by default.
Reason for Change
The CVE source provides the ability to match vulnerabilities that have not yet been fixed upstream or via backports by RedHat, as well as information on
vulnerabilities that will not be fixed. Neither of these classes of vulnerability is covered in the RHSA data, because that data is generated by fix
releases. Overall, the change gives better matches earlier in the vulnerability triage and fix process so you can make better decisions about issues
that affect your images.
Upgrade
During upgrade Anchore will change the matching logic to transition images to use the new feed groups. This update involves:
Completed Automatically During DB Upgrade:
Updating db schema to support new enable/disable flags for feeds and groups.
Disabling the existing centos:* feed groups from future syncs by setting the groups to disabled status.
Updating the internal mappings for distros to use the new groups.
When the system starts, all RHEL/CentOS/UBI images will still have RHSA matches, but the centos:* groups will be disabled so no new updates arrive for those groups.
After upgrade, when the system is running the new version:
Feed service will sync the new data from the source
Policy engine syncs from feed service to get new data
Once the rhel:* groups sync in the policy engine, all RHEL/CentOS/UBI images analyzed before the upgrade will show both CVE and RHSA matches.
Images analyzed after the upgrade will only match CVEs.
The output from a CLI feed listing (anchore-cli system feeds list) should look roughly like the abridged example below (note the disabled centos groups and the synced rhel groups):
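The listing is illustrative only; the timestamps and record counts are placeholders, and your output will contain many more groups:
Feed              Group                  LastSync      RecordCount
vulnerabilities   centos:7(disabled)     <timestamp>   <count>
vulnerabilities   centos:8(disabled)     <timestamp>   <count>
vulnerabilities   rhel:7                 <timestamp>   <count>
vulnerabilities   rhel:8                 <timestamp>   <count>
...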
You can optionally flush the old RHSA matches by using the anchore-cli to delete the centos group data, which will remove both the feed data and the vulnerability matches for the RHSAs, leaving only the CVE matches.
To accomplish this, run the following with the CLI:
[anchore@c4799ee0b36e enterprise]$ anchore-cli system feeds delete vulnerabilities --group centos:5
Group LastSync RecordCount
centos:5(disabled) pending 0
[anchore@c4799ee0b36e enterprise]$ anchore-cli system feeds delete vulnerabilities --group centos:6
Group LastSync RecordCount
centos:6(disabled) pending 0
[anchore@c4799ee0b36e enterprise]$ anchore-cli system feeds delete vulnerabilities --group centos:7
Group LastSync RecordCount
centos:7(disabled) pending 0
[anchore@c4799ee0b36e enterprise]$ anchore-cli system feeds delete vulnerabilities --group centos:8
Group LastSync RecordCount
centos:8(disabled) pending 0
At this point all RHSA matches for all images in the DB have also been removed, leaving only the CVE matches from the new RedHat CVE source.
Feed Service Driver Configuration
The new RHEL CVE feed is enabled in the feed service by default. No changes to configuration are necessary to enable it.
Policy Engine Configuration
No changes to the policy engine configuration are needed to enable the new data because it is delivered as new groups in the existing vulnerabilities feed,
which syncs all groups automatically.
Rolling Back
If you need to restore the old behavior, see the rollback guide.
12.1.29.36.2 - Reverting Back to use RHSA Data
NOTE: This section is only for very specific situations where you absolutely must revert the matching system to use the RHSA data. This should not be done lightly. The newer CVE-based data is more accurate, specific, and provides a more consistent experience with other distros.
If your processing of anchore output relies on RHSA keys as vulnerability matches, or you have large RHSA-based whitelists that cannot be converted to CVE-based,
then it is possible, though not recommended, to migrate your system back to using the RHSA-based feeds (centos:* groups).
Here is the process. It requires the Anchore CLI with access to the API as well as direct access to the internal policy engine API endpoint. That may require a docker exec or kubectl exec call
to achieve and will be deployment/environment specific.
Revert the distro mapping records that map centos, fedora, and rhel to use the RHEL vuln data.
With direct API access to the policy engine (output omitted for brevity), remove the existing distro mappings that point centos, fedora, and rhel at the rhel data, and recreate them pointing at centos. A hedged sketch of these calls follows; consult the policy engine’s swagger spec in your deployment for the exact routes and payloads:
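The endpoint path, port, and payload fields below are assumptions for illustration only (verify them against the policy engine’s swagger.json before use):
# Assumed route and fields -- verify against your policy engine's swagger spec
# 1. Remove the existing mapping that points at the rhel data (repeat for fedora and rhel)
curl -u <user>:<pass> -X DELETE "http://<policy-engine-host>:8087/v1/distro_mappings?from_distro=centos"
# 2. Recreate the mapping pointing at the centos (RHSA) data (repeat for fedora and rhel)
curl -u <user>:<pass> -X POST -H "Content-Type: application/json" \
  -d '{"from_distro": "centos", "to_distro": "centos", "flavor": "RHEL"}' \
  "http://<policy-engine-host>:8087/v1/distro_mappings"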
Note: if something went wrong and you want to undo the progress you’ve made, just make the same set of calls as the last two steps in the same order but with the to_distro values set to ‘rhel’.
Now, ensure you are back in an environment with access to the main Anchore API and with the Anchore CLI installed. Disable the existing rhel feed groups:
anchore-cli system feeds config vulnerabilities --disable --group rhel:5
anchore-cli system feeds config vulnerabilities --disable --group rhel:6
anchore-cli system feeds config vulnerabilities --disable --group rhel:7
anchore-cli system feeds config vulnerabilities --disable --group rhel:8
anchore-cli system feeds delete vulnerabilities --group rhel:8
anchore-cli system feeds delete vulnerabilities --group rhel:7
anchore-cli system feeds delete vulnerabilities --group rhel:6
anchore-cli system feeds delete vulnerabilities --group rhel:5
Enable the centos feed groups that contain the RHSA vulnerability data:
anchore-cli system feeds config vulnerabilities --enable --group centos:8
anchore-cli system feeds config vulnerabilities --enable --group centos:7
anchore-cli system feeds config vulnerabilities --enable --group centos:6
anchore-cli system feeds config vulnerabilities --enable --group centos:5
NOTE: if you already have centos data in your feeds (verify with anchore-cli system feeds list), then you’ll need to delete the centos data groups as well
to ensure a clean re-sync in the next steps. This is accomplished with:
anchore-cli system feeds delete vulnerabilities --group centos:5
anchore-cli system feeds delete vulnerabilities --group centos:6
anchore-cli system feeds delete vulnerabilities --group centos:7
anchore-cli system feeds delete vulnerabilities --group centos:8
Now do a sync to re-match any images using rhel/centos against the RHSA data:
[root@d64b49fe951c ~]# anchore-cli system feeds sync
WARNING: This operation should not normally need to be performed except when the anchore-engine operator is certain that it is required - the operation will take a long time (hours) to complete, and there may be an impact on anchore-engine performance during the re-sync/flush.
Really perform a manual feed data sync/flush? (y/N)y
Feed Group Status Records Updated Sync Duration
github github:composer success 0 0.28s
github github:gem success 0 0.34s
github github:java success 0 0.33s
github github:npm success 0 0.23s
github github:nuget success 0 0.23s
github github:python success 0 0.29s
nvdv2 nvdv2:cves success 0 60.59s
vulnerabilities alpine:3.10 success 0 0.27s
vulnerabilities alpine:3.11 success 0 0.31s
vulnerabilities alpine:3.3 success 0 0.31s
vulnerabilities alpine:3.4 success 0 0.25s
vulnerabilities alpine:3.5 success 0 0.26s
vulnerabilities alpine:3.6 success 0 0.25s
vulnerabilities alpine:3.7 success 0 0.26s
vulnerabilities alpine:3.8 success 0 0.35s
vulnerabilities alpine:3.9 success 0 0.28s
vulnerabilities amzn:2 success 0 0.26s
vulnerabilities centos:7 success 1003 34.91s
vulnerabilities centos:8 success 199 9.15s
vulnerabilities debian:10 success 2 0.50s
vulnerabilities debian:11 success 4 60.53s
vulnerabilities debian:7 success 0 0.30s
vulnerabilities debian:8 success 3 0.34s
vulnerabilities debian:9 success 2 0.38s
vulnerabilities debian:unstable success 4 0.39s
vulnerabilities ol:5 success 0 0.31s
vulnerabilities ol:6 success 0 0.29s
vulnerabilities ol:7 success 0 0.41s
vulnerabilities ol:8 success 0 0.28s
vulnerabilities rhel:5 success 0 0.28s
vulnerabilities rhel:6 success 0 0.43s
vulnerabilities ubuntu:12.04 success 0 0.45s
vulnerabilities ubuntu:12.10 success 0 0.25s
vulnerabilities ubuntu:13.04 success 0 0.24s
vulnerabilities ubuntu:14.04 success 0 0.37s
vulnerabilities ubuntu:14.10 success 0 0.25s
vulnerabilities ubuntu:15.04 success 0 0.42s
vulnerabilities ubuntu:15.10 success 0 0.23s
vulnerabilities ubuntu:16.04 success 0 0.35s
vulnerabilities ubuntu:16.10 success 0 0.33s
vulnerabilities ubuntu:17.04 success 0 0.33s
vulnerabilities ubuntu:17.10 success 0 0.31s
vulnerabilities ubuntu:18.04 success 0 0.42s
vulnerabilities ubuntu:18.10 success 0 0.37s
vulnerabilities ubuntu:19.04 success 0 0.45s
vulnerabilities ubuntu:19.10 success 0 0.32s
[root@d64b49fe951c ~]# anchore-cli image vuln centos os
Vulnerability ID Package Severity Fix CVE Refs Vulnerability URL Type Feed Group Package Path
RHSA-2020:0271 libarchive-3.3.2-7.el8 High 0:3.3.2-8.el8_1 CVE-2019-18408 https://access.redhat.com/errata/RHSA-2020:0271 rpm centos:8 pkgdb
RHSA-2020:0273 sqlite-libs-3.26.0-3.el8 High 0:3.26.0-4.el8_1 CVE-2019-13734 https://access.redhat.com/errata/RHSA-2020:0273 rpm centos:8 pkgdb
RHSA-2020:0575 systemd-239-18.el8_1.1 High 0:239-18.el8_1.4 https://access.redhat.com/errata/RHSA-2020:0575 rpm centos:8 pkgdb
RHSA-2020:0575 systemd-libs-239-18.el8_1.1 High 0:239-18.el8_1.4 https://access.redhat.com/errata/RHSA-2020:0575 rpm centos:8 pkgdb
RHSA-2020:0575 systemd-pam-239-18.el8_1.1 High 0:239-18.el8_1.4 https://access.redhat.com/errata/RHSA-2020:0575 rpm centos:8 pkgdb
RHSA-2020:0575 systemd-udev-239-18.el8_1.1 High 0:239-18.el8_1.4 https://access.redhat.com/errata/RHSA-2020:0575 rpm centos:8 pkgdb
Note in the last command output that the OS vulnerabilities are again showing ‘RHSA’ matches. The restoration to RHSA-based vulnerability data is complete.
12.1.29.37 - Anchore Enterprise Release Notes - Version 2.3.1
Anchore Enterprise 2.3.1
Adds features as well as bug fixes and improvements. Highlighted features are: new parameters in the Reporting service’s GraphQL API for specifying time ranges using a relative window (e.g., last 30 days), and a new CVE blacklisting rule in the policy language to trigger if specific CVEs are found.
Added
New reporting GraphQL parameters for relative time-windows on reports (e.g., last 30 days) as an alternative to absolute date ranges for all queries with start/end parameters.
CVE Blacklisting via a new policy rule.
New “licenses” field in the API response for content (pkgs etc.) that is an array type for easier parsing; see the fragment after this list. Supplements the existing “license” field, which is a comma-delimited list in a single string.
Configuration option to disable the Repository Add feature in the UI
Support for custom links/content on the UI login page
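An illustrative fragment of a package entry in a content response showing both fields (the field values are placeholders):
{
  "package": "example-lib",
  "version": "1.0.0",
  "license": "MIT, Apache-2.0",
  "licenses": ["MIT", "Apache-2.0"]
}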
Improved
Update base docker image to UBI 8.2 from 8.1.
Faster rhel feed driver execution via parallelized data download
Renamed “Registries” to “Registry Credentials” for more clarity in UI
Ask the user to confirm before adding a full repository of tags in the UI, to prevent accidentally adding large numbers of tags
Alphabetical ordering of account data in context list in UI
Ability to copy and paste full tag string in image analysis page in UI
Enable vulnerabilities tab even if image has no vulnerabilities in UI
Bug Fixes
Fixed the ability to whitelist VulnDB IDs in the policy editor in the UI
Fixed PDF generation from vulnerabilities so output is not truncated in the UI
Protect against LDAP service tab whitescreen based on service response in the UI
Fixed previewing a saved query with an invalid filter value showing nothing in the UI
Fixed formatting error on the compliance report in the UI
Fixed filter entry persisting between tabs in the UI
Fixed error viewing vulnerabilities for .NET Core images
Fixed payload handling for the MS Teams integration for notifications.
Fixed the rhel feeds driver to handle updates in upstream data properly
package_type parameter now handles GHSA matches correctly as non-os types.
Correctly finds java content in cases where file permissions prevent a read.
Updates pyyaml dependency to 5.3.1.
Updates several npm dependencies of the UI.
Fixed API documentation in the swagger spec for the registry digest-style POST /image call.
Fixed db upgrade failure during upgrades of deployments that still have the ‘nvd’ data from the deprecated driver.
Additional minor bug fixes and enhancements
Built on Anchore Engine v0.7.2: Anchore Enterprise is built on top of the OSS Anchore Engine, which has received new features and updates as well. See Anchore Engine Release Notes for information on new features, bug fixes, and improvements in Anchore Engine.
12.1.29.38 - Anchore Enterprise Release Notes - Version 2.2.0
Anchore Enterprise 2.2.0
Building upon the Anchore Enterprise 2.0 release, Anchore Enterprise 2.2 adds major new features and architectural updates that extend integration / deployment options, security insights, and the evaluation power available to all users.
New Features
Integration with Github, Jira, Slack and Microsoft Teams: Anchore Enterprise Notifications is a new capability offered in version 2.2, bringing the ability to flexibly configure your Enterprise deployment to send proactive system, user, and workload level notification events to a variety of third party systems.
System Dashboard and Feed Sync Status: New system dashboard in the Enterprise GUI which makes it easier to review the status of your Anchore Enterprise deployment, troubleshoot issues and understand the roles of the various services.
Harbor Support: Anchore Enterprise 2.2 is fully supported by the latest release of the CNCF’s Harbor project (v1.10+), an open source container and artifact registry.
Built on Anchore Engine v0.6: Anchore Enterprise is built on top of the OSS Anchore Engine, which has received new features and updates as well. See Anchore Engine Release Notes for information on new features, bug fixes, and improvements in Anchore Engine.
12.1.29.39 - Anchore Enterprise Release Notes - Version 2.1.0
Anchore Enterprise 2.1.0
Building upon the Anchore Enterprise 2.0 release, Anchore Enterprise 2.1 adds major new features and architectural updates that extend integration / deployment options, security insights, and the evaluation power available to all users.
New Features
GUI report enhancements: Leveraging Anchore Enterprise’s reporting service, there is a new set of configurable queries available within the Enterprise GUI Reports control. Users can now generate filtered reports (tabular HTML, JSON, or CSV) that contain image, security, and policy evaluation status for collections of images.
Single-Sign-On (SSO): Integration support for common SSO providers such as Okta, Keycloak, and other Enterprise IDP systems, in order to simplify, secure, and better control aspects of user management within Anchore Enterprise
Enhanced authentication methods: SAML / token-based authentication for API and other client integrations
Enhanced vulnerability data: Inclusion of third party vulnerability data feeds from Risk Based Security (VulnDB) for increased fidelity, accuracy, and liveness of image vulnerability scanning results, available for all existing and new images analyzed by Anchore Enterprise
Policy Hub GUI: View, list and import pre-made security, compliance and best-practices policies hosted on the open and publicly available Anchore Policy Hub
Built on Anchore Engine v0.5: Anchore Enterprise is built on top of the OSS Anchore Engine, which has received new features and updates as well. See Anchore Engine Release Notes for information on new features, bug fixes, and improvements in Anchore Engine.
12.1.29.40 - Anchore Enterprise Release Notes - Version 2.0.0
Anchore Enterprise 2.0.0
Building on top of the existing Anchore Enterprise 1.2 release, Anchore Enterprise version 2.0 adds major new features and architectural updates. The overarching purpose of the new features and design of the 2.0 version is to directly address the challenges of continued growth and scale: extending the enterprise integration capabilities of Anchore, establishing an architecture that grows alongside our users’ demanding throughput and scale requirements, and offering even more insight into users’ container image environments through rich new APIs and reporting capabilities, all in addition to the rich set of enforcement capabilities included with Anchore Enterprise’s flexible policy engine.
New Features
GUI Dashboard: new configurable landing page for users of the Enterprise UI, presenting complex information summaries and metrics time series for deep insight into the collective status of your container image environment.
Enterprise Reporting Service: entirely new service that runs alongside existing Anchore Enterprise services that exposes the full corpus of container image information available to Anchore Engine via a flexible GraphQL interface
LDAP Integration: Anchore Enterprise can now be configured to integrate with your organization’s LDAP/AD identity management system, with flexible mappings of LDAP information to Anchore Enterprise’s RBAC account and user subsystem.
Red Hat Universal Base Image: all Anchore Enterprise container images have been re-platformed atop the recently announced Red Hat Universal Base Image, bringing more enterprise-grade software and support options to users deploying Anchore Enterprise in Red Hat environments.
Anchore Engine v0.4: Anchore Enterprise is built on top of the OSS Anchore Engine, which has received many new features and updates as well. See Anchore Engine Release Notes for information on new features, bug fixes, and improvements in Anchore Engine.
Upgrading from Anchore Enterprise 1.2
If using the trial docker compose method, or the production Helm chart method of deploying Anchore Enterprise, upgrading from 1.2 to 2.0 follows the normal upgrade procedure for Anchore Enterprise. However, if you are deploying Anchore Enterprise manually or using another orchestration environment, there are new dependencies and considerations to take into account for deploying Enterprise 2.0. Please visit the upgrade section for more information.
12.3.1 - AnchoreCTL Release Notes - Version 5.18.0
Note: AnchoreCTL v5.18.x versions are compatible with Enterprise v5.18.x deployments.
AnchoreCTL v5.18.0
Improvements
The command anchorectl image sbom <digest> -o cyclonedx-json now supports an additional flag, -x, which removes
file entries from the SBOM (see the example after this list). This flag is only available for the CycloneDX output format.
Various package updates to improve security and performance.
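For example (the digest and output filename are placeholders):
anchorectl image sbom sha256:<digest> -o cyclonedx-json -x > image-sbom.cdx.json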
Fixes
The image add command now correctly handles the case when the user specifies --from <source> and the
name of the source is also a local file or directory name.
12.3.2 - AnchoreCTL Release Notes - Version 5.17.0
Note: AnchoreCTL v5.17.x versions are compatible with Enterprise v5.17.x deployments.
AnchoreCTL v5.17.0
Improvements
Various package updates to improve security and performance.
Fixes
When using distributed analysis, AnchoreCTL will correctly identify RHEL-based images that contain only a /etc/redhat-release file.
12.3.3 - AnchoreCTL Release Notes - Version 5.16.0
Note: AnchoreCTL v5.16.x versions are compatible with Enterprise v5.16.x deployments.
AnchoreCTL v5.16.0
Improvements
Various package updates to improve security and performance.
Fixes
Fixes a failure to download an SBOM added via distributed analysis in the SPDX or CycloneDX format.
12.3.4 - AnchoreCTL Release Notes - Version 5.15.1
Note: AnchoreCTL v5.15.x versions are compatible with Enterprise v5.15.x deployments.
AnchoreCTL v5.15.1
Fixes
Fixes image command failures that occurred when your account has images larger than 4GB
anchorectl image list
anchorectl image get
anchorectl image content
12.3.5 - AnchoreCTL Release Notes - Version 5.15.0
Note: AnchoreCTL v5.15.x versions are compatible with Enterprise v5.15.x deployments.
AnchoreCTL v5.15.0
Improvements
New Command anchorectl auth set-password provides a user the ability to set their own password.
Various package updates to improve security and performance.
Fixes
The parent digest is now correctly represented for fat manifests when using the command anchorectl image add <repo:tag> --from registry.
12.3.6 - AnchoreCTL Release Notes - Version 5.14.0
Note: AnchoreCTL v5.14.x versions are compatible with Enterprise v5.14.x deployments.
AnchoreCTL v5.14.0
Improvements
Commands will now return the usage string when an invalid command is entered.
Command anchorectl images vuln <image> -o json-raw is now available to output raw JSON data for vulnerabilities.
Commands which display archive rules with the “Exclude Last Seen” option set will now display this value.
anchorectl archive rule add
anchorectl archive rule list
anchorectl archive rule get <rule id>
Improved the help text for command anchorectl image check to clarify when the --tag option is required.
New command to help delete inactive system integration health data.
anchorectl system integration health delete <uuid>
Command anchorectl system smoke-tests run now supports --image flag so the user can specify their own image to use for the test. This is helpful for users in air-gapped environments.
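For example (the image reference is a placeholder for one reachable from your deployment):
anchorectl system smoke-tests run --image registry.example.com/myteam/test-image:latest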
Fixes
The command anchorectl airgap feed upload is now functioning properly when executed on a Windows system.
12.3.7 - AnchoreCTL Release Notes - Version 5.13.0
Note: AnchoreCTL v5.13.x versions are compatible with Enterprise v5.13.x deployments.
AnchoreCTL v5.13.0
Improvements
Updated to use Syft v1.17.0
12.3.8 - AnchoreCTL Release Notes - Version 5.12.0
Note: AnchoreCTL v5.12.x versions are compatible with Enterprise v5.12.x deployments.
AnchoreCTL v5.12.0
Improvements
The command anchorectl user add now supports all RBAC Roles.
The command anchorectl usergroup role add now supports all RBAC Roles except account-view, which is restricted from being added to a usergroup.
Provide better error messages when downloading datasets or uploading datasets to Anchore Enterprise while in air-gapped environments.
The air gapped workflow has been improved to download and upload the EPSS dataset.
12.3.9 - AnchoreCTL Release Notes - Version 5.11.0
Note: AnchoreCTL v5.11.x versions are compatible with Enterprise v5.11.x deployments.
AnchoreCTL v5.11.0
Improvements
With the addition of integration health updates in Enterprise v5.11.0, the following commands provide
data on the health of the integrations and Anchore Enterprise:
New command anchorectl system integration list to list all the integrations registered with the system.
New command anchorectl system integration get <UUID> to get the details of a specific integration.
Fixes
The event list command can now support filtering events by the resource-id of the event.
Example: anchorectl event list --resource-id grypedb
The anchorectl system smoke-tests command now correctly returns a non-zero exit code when a test fails. The test has also been updated to use an image with known vulnerabilities.
12.3.10 - AnchoreCTL Release Notes - Version 5.10.1
Note: AnchoreCTL v5.10.x versions are compatible with Enterprise v5.10.x deployments.
AnchoreCTL v5.10.1
Fixes the command anchorectl system smoke-tests run
12.3.11 - AnchoreCTL Release Notes - Version 5.10.0
Note: AnchoreCTL v5.10.x versions are compatible with Enterprise v5.10.x deployments.
AnchoreCTL v5.10.0
AnchoreCTL has been updated to support the new Data Syncer service. AnchoreCTL has been enhanced to handle Air Gapped imports of datasets with the data syncer service.
Updated Commands:
anchorectl feeds list: List all available feeds; this list now includes other datasets like CISA KEV and ClamAV Malware signatures.
anchorectl feeds sync: Sync all available feeds.
New Commands
anchorectl airgap feed download: Download all feeds for air-gapped environments.
anchorectl airgap feed upload: Import the downloaded feeds into Enterprise.
12.3.12 - AnchoreCTL Release Notes - Version 5.9.1
Note: AnchoreCTL v5.9.x versions are compatible with Enterprise v5.9.x deployments.
AnchoreCTL v5.9.1
Fixes the command anchorectl system smoke-tests run
12.3.13 - AnchoreCTL Release Notes - Version 5.9.0
Note: AnchoreCTL v5.9.x versions are compatible with Enterprise v5.9.x deployments.
AnchoreCTL v5.9.0
A feature and bug fix release which includes:
The command anchorectl repo add <repo name> now supports the --exclude-existing-tags flag. When set, this flag excludes tags that are already present in the repository, so only newly created tags are added to the Enterprise system (see the example after this list).
Various supporting libraries have been updated in order to improve security.
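For example (the repository name is a placeholder):
anchorectl repo add docker.io/library/nginx --exclude-existing-tags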
12.3.14 - AnchoreCTL Release Notes - Version 5.8.1
Note: AnchoreCTL v5.8.x versions are compatible with Enterprise v5.8.x deployments.
AnchoreCTL v5.8.1
Various supporting libraries have been updated in order to improve security.
12.3.15 - AnchoreCTL Release Notes - Version 5.8.0
Note: AnchoreCTL v5.8.x versions are compatible with Enterprise v5.8.x deployments.
AnchoreCTL v5.8.0
A feature and bug fix release which includes:
Improves an error message when deleting images without a force flag.
Fixed an issue that prevented images from being analyzed when the cataloger scope was set to Scoped or AllLayers.
Various supporting libraries have been updated in order to improve security.
12.3.16 - AnchoreCTL Release Notes - Version 5.7.0
Note: AnchoreCTL v5.7.x versions are compatible with Enterprise v5.7.x deployments.
AnchoreCTL v5.7.0
A feature and bug fix release which includes:
Cataloger scope specified from the configuration file is now respected during the image content command.
Improvements to golang release version extraction from go binary ldflags.
Various supporting libraries have been updated in order to improve security.
12.3.17 - AnchoreCTL Release Notes - Version 5.6.0
Note: AnchoreCTL v5.6.x versions are compatible with Enterprise v5.6.x deployments.
AnchoreCTL v5.6.2
A maintenance release which includes:
Updates to the Syft version of v1.5.0
AnchoreCTL v5.6.1
A bug fix release which includes:
Fails the creation of a user within the admin account when an RBAC Role is specified. If the user is not being created in the admin account, the default RBAC Role is read-write unless otherwise specified.
AnchoreCTL v5.6.0
A feature and bug fix release which includes:
The addition of a system smoke-tests run command. This can be used as a tool to aid the assessment of the health of your Anchore
Enterprise deployment by executing a few basic operations.
The command requires the caller to have admin credentials.
The command does not have the ability to assess the health of the feed service, the report service, or the notification service.
The command feed list now includes the Last Updated column which is the last successful update time of the specific feed groups.
Updates the system artifact-lifecycle-policy commands to expose a new policy condition which allows for the preservation of base images.
Improved an error message during creation of a user within the admin account when an RBAC Role is specified.
Various supporting libraries have been updated in order to improve security.
12.3.18 - AnchoreCTL Release Notes - Version 5.5.0
The latest version of AnchoreCTL is 5.5.0.
Note: AnchoreCTL v5.5.x versions are compatible with Anchore Enterprise v5.5.x deployments.
AnchoreCTL v5.5.0 is a maintenance release
Various supporting libraries have been updated in order to improve security
12.3.19 - AnchoreCTL Release Notes - Version 5.4.0
The latest version of AnchoreCTL is 5.4.0.
Note: AnchoreCTL v5.4.x versions are compatible with Anchore Enterprise v5.4.x deployments.
AnchoreCTL v5.4.0 is a feature and bug fix release which includes:
RBAC Role Support
Addition of the following commands that are accessible by users with admin, account-user-admin, or full-control.
anchorectl system role list - returns the list of supported RBAC Roles.
anchorectl system role get <rbac role name> - returns description and list of permissions of the specified role.
User Group Support
Commands for the management of User Groups
anchorectl usergroup add <usergroup name or uuid> [--description <string>]
anchorectl usergroup delete <usergroup name or uuid>
anchorectl usergroup role add <usergroup name> <account name> --role <rbac role name>
anchorectl usergroup role delete <usergroup name> <account name> --role <rbac role name>
anchorectl usergroup role list <usergroup name>
anchorectl usergroup user add <usergroup name> --user <username>
anchorectl usergroup user delete <usergroup name> --user <username>
anchorectl usergroup user list <usergroup name>
anchorectl system wait command now defaults to waiting only on the Enterprise API Service. The --services flag can be used to specify other services that should be waited on as well.
Return the image content even when the parent digest is used for the request. This was previously seen as an error in anchorectl image content.
Various supporting libraries have been updated in order to improve security
12.3.20 - AnchoreCTL Release Notes - Version 5.3.0
The latest version of AnchoreCTL is 5.3.0.
Note: AnchoreCTL v5.3.x versions are compatible with Anchore Enterprise v5.3.x deployments.
AnchoreCTL v5.3.0 is a feature and bug fix release which includes:
Enable the dotnet-deps-cataloger for image analysis
Various supporting libraries have been updated in order to improve security
12.3.21 - AnchoreCTL Release Notes - Version 5.2.0
The latest version of AnchoreCTL is 5.2.0.
Note: AnchoreCTL v5.2.x versions are compatible with Anchore Enterprise v5.2.x deployments.
AnchoreCTL v5.2.0 is a feature and bug fix release which includes:
Adds the ability to delete runtime inventory with inventory delete.
Adds the ability for admins to edit the email field of accounts with account update.
Addresses an exception in the system artifact-lifecycle-policy update command when the policy uuid was not provided.
Adds a new field, password_last_updated, to the response of user list and user get commands.
image content command correctly displays the licenses property in the response.
image vuln command provides an optional flag, --include-description, that is available with the json output format. Using this flag will include the description for each vulnerability listed.
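For example (the image reference is a placeholder; other flags follow your normal usage):
anchorectl image vuln registry.example.com/app:1.0 -o json --include-description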
12.3.22 - AnchoreCTL Release Notes - Version 5.1.0
The latest version of AnchoreCTL is 5.1.0.
AnchoreCTL 5.1.0 is a feature and bug fix release which includes:
Removes errant ‘status’ string at beginning of anchorectl image check <img> --detail output which caused invalid json.
Updates Syft version to v0.97.1 aligned with Enterprise 5.1.0
AnchoreCTL 5.1.x versions are compatible with Anchore Enterprise 5.1.X deployments.
12.3.23 - AnchoreCTL Release Notes - Version 5.0.1
The latest version of AnchoreCTL is 5.0.1.
AnchoreCTL 5.0.1 is a bug fix release which includes:
A fix for a stack overflow that can be seen when executing the command anchorectl image check <image> --detail. This can occur when the image has an allowlisted policy finding.
AnchoreCTL 5.0.x versions are compatible with Anchore Enterprise 5.0.X deployments.
12.3.24 - AnchoreCTL Release Notes - Version 5.0.0
The latest version of AnchoreCTL is 5.0.0.
NOTE: This version of AnchoreCTL only supports Anchore Enterprise 5.0.x
AnchoreCTL 5.0.0 is a feature and bug fix release which includes:
Dependency updates, and general client updates to support Anchore Enterprise v5.0.0
Change to version scheme, switching to keep version of AnchoreCTL inline with the version of Anchore Enterprise that the client supports (by semver compatibility)
Add sub-command for policy update
Add single java version column to the table output for java content
Remove rbac-url requirement from configuration in support of Anchore Enterprise v5.0.0’s single API feature
Remove the fix_observed_at date from table output for image vulnerability operation
Update the inventory watch commands
Update source policy check output to be more inline with image policy check output
Fix to some cases where the command could hang or terminal could get scrambled
Update to Syft 0.90.0, inline with the version of Syft used in Anchore Enterprise 5.0.0
AnchoreCTL 5.0.x versions are compatible with Anchore Enterprise 5.0.X deployments.
12.3.25 - End-of-Life Releases
12.3.25.1 - AnchoreCTL Release Notes - Version 4.9.0
AnchoreCTL 4.9.0 is a V2 API-compatibility release that is otherwise identical to 1.8.0.
Warning
AnchoreCTL 4.9.0 is compatible with Enterprise 4.9.x ONLY and requires the V2 API.
To minimize impact to automated installations, the V2 API compatible AnchoreCTL will not be automatically upgraded using
the install script. See Installation for more information.
AnchoreCTL v4.9.0 uses Syft 0.84.1, the same as AnchoreCTL v1.8.0
AnchoreCTL 4.9.x versions are compatible with Anchore Enterprise 4.9.X deployments.
12.3.25.2 - AnchoreCTL Release Notes - Version 1.8.0
The latest version of AnchoreCTL is 1.8.0.
AnchoreCTL 1.8.0 is a feature and bug fix release which includes:
Adds the ability to create explicit SAML users with user add --idp_name
Adds the ability to list, activate and deactivate runtime inventory watchers with inventory watch
Extends image content command to support the type content_search
Extends image content command to support the type retrieved_files
Extends image content command to support the type secret_search
Adds the ability to specify the image platform to retrieve and analyze when using the --from registry source in the image add command so that local analysis can be done on images of a different architecture than the local host where the analysis occurs.
Add an API version check to prevent accidental use of 1.8.0 against an Anchore V2 API endpoint. See Configuration for more information.
Update to using Syft 0.84.1
12.3.25.3 - AnchoreCTL Release Notes - Version 1.7.0
The latest version of AnchoreCTL is 1.7.0.
AnchoreCTL 1.7.0 is a feature and bug fix release which includes:
Adds more detail from the Anchore Enterprise service for error responses, exposing the server side error detail to the user
Adds new formats (spdx, cycloneDX) to the SBOM output options when using the content get options during image add operations
Add support for new ancestor list command
Add new recommendation field to policy evaluation table output for the image check operation
Changed the policy evaluation level of detail from basic to full detail when fetching policy evaluation during image add operation
Fixed issue where the sbom content was not being fetched when the all type was given to the get option, in the image add operation
Update to using Syft 0.80.0
12.3.25.4 - AnchoreCTL Release Notes - Version 1.6.0
The latest version of AnchoreCTL is 1.6.0.
AnchoreCTL 1.6.0 is a feature and bug fix release which includes:
Adds ability to generate container image SBOMs using a new ‘--from’ option to anchorectl image add. This removes the need to use Syft with anchorectl. AnchoreCTL can now perform all the analysis itself and upload it to your Enterprise deployment. See Using CLI for Images for more information.
Adds extra analysis locally in addition to the SBOM generation. Filesystem metadata, secret scans, content scans, and file retrieval are now supported, as they are when analyzing an image inside an Anchore Enterprise deployment
The additional analysis features of secret scans, filesystem metadata, and content searches are only compatible with Anchore Enterprise 4.7+
Fixes the --help output for the ‘completion’ commands to provide correct autocompletion setup guidance
Fixes duplication of vulns shown when no type is specified in anchorectl image vuln <digest> usage
Update to using Syft 0.79.0
12.3.25.5 - AnchoreCTL Release Notes - Version 1.5.0
The latest version of AnchoreCTL is 1.5.0.
AnchoreCTL 1.5.0 is a bug fix release which includes:
Updates a help string for subscription update command to include the runtime_inventory subscription type
Fixes image add <tag> --wait failure with image not found if the same tag is added with another image digest by another client while waiting for the original image to analyze
Update to using Syft 0.75.0
12.3.25.6 - AnchoreCTL Release Notes - Version 1.4.0
The latest version of AnchoreCTL is 1.4.0.
AnchoreCTL 1.4.0 is a feature release which includes:
Adds full output format option support to ‘source sbom’ command similar to ‘image sbom’ operation, including spdx and cyclonedx formats
Adds new command to get a list of vulnerabilities in a specific application version across all artifacts (images and sources)
Adds csv output format for source-repo vulnerability and policy evaluation commands
Fixes adding of incorrect image to application version when using a tag reference in cases where more than one image with that tag is present in the system
Update to using Syft 0.72.1
12.3.25.7 - AnchoreCTL Release Notes - Version 1.3.0
The latest version of AnchoreCTL is 1.3.0.
AnchoreCTL 1.3.0 is a maintenance release which includes:
Added SPDX, CycloneDX and other format options alongside the default JSON format, to the ‘image sbom’ fetch operation
Added CSV format option to ‘image vulnerabilities’ and ‘image check’ operations
Enabled the ability to add container images to Anchore Enterprise by image digest
Add a new ‘CVEs’ column to default table output for ‘image vulnerabilities’ operation for non-CVE findings that refer to one or more CVEs
Update ‘image add’ from SBOM to respect the --no-auto-subscribe flag
Fixes segfault when adding application association to an image that is in analyzing state
Update to using Syft 0.62.3
12.3.25.8 - AnchoreCTL Release Notes - Version 1.2.0
The latest version of AnchoreCTL is 1.2.0.
AnchoreCTL 1.2.0 is a maintenance release which includes:
Support for ‘recommendation’ fields from policy evaluations when used with Enterprise 4.1.1
Fixed to only show a vulnerability once in anchorectl image vuln when not using the -t/--type option
Help and command typo fixes
Updated to using Syft v0.58.0
12.3.25.9 - AnchoreCTL Release Notes - Version 1.1.0
The latest version of AnchoreCTL is 1.1.0.
AnchoreCTL 1.1.0 is a maintenance release which includes:
inventory list command to show all images in the inventory
compatibility with Syft v0.56.0
Updated to using Syft v0.56.0
12.3.25.10 - AnchoreCTL Release Notes - Version 1.0.0
The latest version of AnchoreCTL is 1.0.0.
AnchoreCTL 1.0.0 represents the first stable release of the tool as the primary CLI for Anchore Enterprise users. Configuration, command structure, and capabilities have all been renovated to support usage of the client by administrators and users, and within scripting environments for automated integration.
The image add and source add commands have been revisited to additionally provide a simple way to extract common data from Anchore Enterprise:
anchorectl image add <my-image> --get vulnerabilities,content : get a summary of content and vulnerabilities to stdout
anchorectl image add <my-image> --get all=/path/to/store/results: get policy evaluation, vuln, and content results, and store all raw JSON files to /path/to/store/results
anchorectl image add <my-image> --get policy-evaluation: will get the policy evaluation results and set the return code to 1 if the policy evaluation is not passing (allowing use as a quality gate)
Added the ability to associate images and sources with an application name and version when adding into the system (e.g. anchorectl image add <my image> --application <name>@<version>).
The UI for all commands has been enhanced to convey intermediate progress and to be transparent about the actions taken to produce any result. For instance, setting ANCHORECTL_DEBUG_API=true and increasing log levels to “debug” or “trace” (-vv or -vvv) will show individual API events and responses
The anchorectl.yaml application configuration has changed, use anchorectl --help to see the latest configuration schema
Added flag to switch output format for most commands to one of text, json, json-raw, or ID
Updated to using syft v0.52.0
12.3.25.11 - AnchoreCTL Release Notes - Version 0.2.0
The latest version of AnchoreCTL is 0.2.0.
AnchoreCTL is dependent on Syft v0.39.3 as a library.
The current features that are supported are as follows:
Ability to add SBOMs via anchorectl using stdin to provide an existing SBOM without re-creating it.
12.3.25.12 - AnchoreCTL Release Notes - Version 0.1.4
The latest version of AnchoreCTL is 0.1.4.
AnchoreCTL is dependent on Syft v0.39.3 as a library.
The current features that are supported are as follows:
Source Repository Management: Generate an SBOM and store the SBOM in Anchore’s database. Get information about the source repository, investigate vulnerability packages by requesting vulnerabilities for a single analyzed source repository, or get any policy evaluations.
Download full image SBOMs for images analyzed with Enterprise 4.0.0.
Compliance Reports: View and operate on runtime compliance reports, such as STIGs, created by the rem tool.
Corrections Management: View and modify corrections information to help reduce false positives in your vulnerability results.
Image Management: View, list, import local analysis, and request image analysis by the system.
Runtime Inventory Management: Add, update, and view cluster configurations for Anchore to scan, as well as for the inventory reports themselves.
System Operations: View and manage system information for your Enterprise deployment.
12.4 - Anchore Data Service
Release Notes
12.4.1 - Anchore Data Service Release Notes - Version 0.10.1 (2025-03-05)
12.4.3 - Anchore Data Service Release Notes - Version 0.9.0 (2025-02-05)
Anchore Data Service v0.9.0 - 2025-02-05
The Malware database now uses ClamAV version 1.0.8. This version includes the latest malware signatures and detection capabilities.
Added enhancements to the CISA KEV Database to fix certain inconsistencies in the data.
12.4.4 - Anchore Data Service Release Notes - Version 0.8.0 (2024-11-18)
Anchore Data Service v0.8.0 - 2024-11-18
The vulnerability database has been improved by providing the inferred NVD Fix Version for vulnerabilities when possible.
This data is used to provide information on the version of a package that contains the fix for a vulnerability.
Customers running Enterprise v5.10.0 and greater will automatically see this improved data on their next data sync.
This data is used in the Vulnerabilities policy gate’s Package trigger with the optional Fix Available parameter.
The EPSS (Exploit Prediction Scoring System) dataset can be used to provide a risk score for a vulnerability based on the likelihood that it will be exploited.
The EPSS dataset will be available to all Anchore customers once they upgrade to the future Enterprise v5.12.0 release
which is expected at the end of November 2024.
This data will be used in the Vulnerabilities Policy Gate and Package Trigger with optional parameters:
EPSS Score Comparison
EPSS Score
EPSS Percentile Comparison
EPSS Percentile
12.4.6 - Anchore Data Service Release Notes - Version 0.6.1 (2024-10-23)
Anchore Data Service v0.6.1 - 2024-10-23
Updated Grype DB to v0.26.0, which includes the following change:
Ability to handle symlink paths when found in the upstream vulnerability providers.
12.4.7 - Anchore Data Service Release Notes - Version 0.6.0 (2024-10-18)
Anchore Data Service v0.6.0 - 2024-10-18
Grype DB version has been incremented to 0.25.0. This brings in the following change:
Grype DB now fetches OS type records from the NVD database.
12.4.8 - Anchore Data Service Release Notes - Version 0.5.1 (2024-09-26)
Anchore Data Service v0.5.1 - 2024-09-26
Initial release of Anchore Data Service
Anchore Data Service is a hosted service by Anchore that provides various data to all Enterprise customers. The datasets served include:
Vulnerability Database (grypedb)
ClamAV Malware Database
CISA KEV (Known Exploited Vulnerabilities)
Your Anchore License is all that’s required to authenticate with this service. The data syncer service in your Enterprise installation will automatically sync this data to your installation.
12.5 - Kubernetes Admission Controller
Release Notes
12.5.1 - Kubernetes Admission Controller Release Notes - Version 0.6.3
Kubernetes Admission Controller v0.6.3
Improvements
Various supporting packages have been updated in order to improve security.
12.6.1 - Kubernetes Inventory Release Notes - Version 1.7.6
Kubernetes Inventory v1.7.6
Improvements
Various supporting packages have been updated in order to improve security.
Use the internal version of k8s-inventory when reporting the health status of the k8s-inventory pod to the Enterprise deployment. This
allows a more dynamic update of the k8s-inventory version when reporting status.
12.6.2 - Kubernetes Inventory Release Notes - Version 1.7.5
Kubernetes Inventory v1.7.5
Improvements
Various supporting packages have been updated in order to improve security.
Fixes
Fixes an issue where a restart of your k8s-inventory pod incorrectly created a new health registry entry within your Enterprise deployment.
12.6.6 - Kubernetes Inventory Release Notes - Version 1.7.1
Kubernetes Inventory v1.7.1
Requirements
Make sure to use k8s-inventory helm chart v0.5.0 when deploying on Kubernetes.
Use Enterprise v5.11.0 for the agent to enable integration health reporting. The
health reporting will otherwise be disabled until Enterprise is upgraded.
Improvements
Adds support for integration registration and health reporting.
12.8.1 - Harbor Scanner Adapter Release Notes - Version 1.4.1
Harbor Scanner Adapter v1.4.1
Fixes
The “Fixed in Version” field for vulnerabilities is no longer empty.
The scanner adapter v1.4.1 now provides the information so that Harbor can display it.
Further details regarding the “Fixed in version” field of vulnerabilities in Harbor and what can be expected from the bug fix in v1.4.1:
When an image is scanned for vulnerabilities, Harbor stores the detected vulnerabilities in a database table. Bindings between the image and its vulnerabilities are stored in another database table.
If another scanned image has some vulnerability that already exists in the database, that image is also bound to that existing vulnerability. Even if the new scan provides some updated information (like fixed in version) about the vulnerability, the vulnerability info in the Harbor database is not updated.
This has the consequence that the “fixed in version” field may still be unpopulated even if harbor-scanner-adapter v1.4.1 provides that value.
Example:
Image A has vulnerabilities X and Y and is scanned in a deployment with harbor-scanner-adapter v1.4.0 (or earlier). Result: Image A’s vulnerabilities X and Y will have an empty “fixed in version” value in Harbor.
The same deployment is later updated to use harbor-scanner-adapter v1.4.1.
Image A is rescanned. Result: Image A’s vulnerabilities X and Y will still have an empty “fixed in version” value in Harbor. Image B, which has vulnerabilities X and Z, is scanned next in Harbor. Result: Image B’s vulnerability X will have an empty “fixed in version” value.
Image B’s vulnerability Z will have “fixed in version” populated (if it had a non-empty value).
Anchore Enterprise is designed to run locally. It does not share data with Anchore Inc. or any third parties.
Anchore Enterprise can be configured to download vulnerability and other feed data by using the Anchore Data Service, a hosted endpoint that serves pre-built datasets. This data can be accessed by using the Anchore Air-Gapped Capability for isolated environments with no outside internet connectivity.
No data from your deployment is uploaded to Anchore or any third party.