Anchore Secure - Vulnerability Management
1 - Image Analysis
1.1 - Analyzing Images via CTL
Introduction
In this section you will learn how to analyze images with Anchore Enterprise using AnchoreCTL in two different ways:
- Distributed Analysis: Content analysis by AnchoreCTL where it is run and importing the analysis to your Anchore deployment
- Centralized Analysis: The Anchore deployment downloads and analyzes the image content directly
Using AnchoreCTL for Centralized Analysis
Overview
This method of image analysis uses the Enterprise deployment itself to download and analyze the image content. You’ll use AnchoreCTL to make API requests to Anchore to tell it which image to analyze but the Enterprise deployment does the work. You can refer to the Image Analysis Process document in the concepts section to better understand how centralized analysis works in Anchore.
sequenceDiagram
    participant A as AnchoreCTL
    participant R as Registry
    participant E as Anchore Deployment
    A->>E: Request image analysis
    E->>R: Get image content
    R-->>E: Image content
    E->>E: Analyze image content (generate SBOM, secret scans, etc.) and store results
    E->>E: Scan SBOM for vulnerabilities and evaluate compliance
Usage
The anchorectl image add command instructs the Anchore Enterprise deployment to pull (download) and analyze an image from a registry. Anchore Enterprise will attempt to retrieve metadata about the image from the Docker registry and, if successful, will initiate a pull of the image and queue it for analysis. The command outputs details about the image, including the image digest, image ID, and full name of the image.
# anchorectl image add docker.io/library/nginx:latest
✔ Added Image
Image:
status: not-analyzed (active)
tag: docker.io/library/nginx:latest
digest: sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
id: 2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
For an image that has not yet been analyzed, the status will appear as not_analyzed. Once the image has been downloaded it will be queued for analysis. When the analysis begins the status will be updated to analyzing, after which the status will update to analyzed.
Anchore Enterprise can be configured with a size limit for images being added for analysis. Attempting to add an image that exceeds the configured size will fail, return a 400 API error, and log an error message in the catalog service detailing the failure. This feature is disabled by default; see the documentation for additional details on this feature and instructions on how to configure the limit.
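As a rough sketch only, such a limit is set in the Anchore Enterprise configuration; the key name and placement below are assumptions and should be checked against the documentation for your version:

```yaml
# Hypothetical config fragment: reject images larger than ~700 MB compressed.
# Key name and placement are assumptions; consult your version's documentation.
max_compressed_image_size_mb: 700
```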
Using AnchoreCTL for Distributed Analysis
Overview
This way of adding images uses anchorectl to perform analysis of an image outside the Enterprise deployment, so the Enterprise deployment never downloads or touches the image content directly. The generation of the SBOM, secret searches, filesystem metadata, and content searches are all performed by AnchoreCTL on the host where it is run (CI, laptop, runtime node, etc.), and the results are imported into the Enterprise deployment, where they can be scanned for vulnerabilities and evaluated against policy.
sequenceDiagram
    participant A as AnchoreCTL
    participant R as Registry/Docker Daemon
    participant E as Anchore Deployment
    A->>R: Get image content
    R-->>A: Image content
    A->>A: Analyze image content (generate SBOM, secret scans, etc.)
    A->>E: Import SBOM, secret search, filesystem metadata
    E->>E: Scan SBOM for vulnerabilities and evaluate compliance
Configuration
Enabling the full set of analyzers, “catalogers” in AnchoreCTL terms, requires updates to the config file used by AnchoreCTL. See Configuring AnchoreCTL for more information on the format and options.
Usage
Note
To locally analyze an image that has been pushed to a registry, it is strongly recommended to use '--from registry' rather than '--from docker'.
This removes the need to have Docker installed and also results in a consistent image digest for later use. The registry option gives anchorectl access to data that the docker source does not, due to limitations in the Docker daemon itself and how it handles manifests and image digests.
The anchorectl image add --from [registry|docker] command runs local SBOM generation and analysis (secret scans, filesystem metadata, and content searches) and uploads the result to Anchore Enterprise, without the image ever being touched or loaded by your Enterprise deployment.
# anchorectl image add docker.io/library/nginx:latest --from registry
✔ Added Image
Image:
status: not-analyzed (active)
tag: docker.io/library/nginx:latest
digest: sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
id: 2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
For an image that has not yet been analyzed, the status will appear as not_analyzed. Once the analysis has been imported it will be queued for processing. When the processing begins the status will be updated to analyzing, after which the status will update to analyzed.
The '--platform' option in distributed analysis specifies a platform other than the local host's to use when retrieving the image from the registry for analysis by AnchoreCTL.
# anchorectl image add alpine:latest --from registry --platform linux/arm64
Adding images that you own
For images that you are building yourself, the Dockerfile used to build the image should always be passed to Anchore Enterprise at the time of image addition. This is achieved by adding the image as above, but with the additional option to pass the Dockerfile contents to be stored with the system alongside the image analysis data.
This can be achieved in both analysis modes.
For centralized analysis:
# anchorectl image add myrepo.example.com:5000/app/webapp:latest --dockerfile /path/to/Dockerfile
For distributed analysis:
# anchorectl image add myrepo.example.com:5000/app/webapp:latest --from registry --dockerfile /path/to/Dockerfile
To update an image's Dockerfile, simply run the same command again with the path to the updated Dockerfile, along with '--force' to re-analyze the image with the updated Dockerfile. Note that running add without --force (see below) will not re-add an image if it already exists.
Providing Dockerfile content is supported in both push and pull modes for adding images.
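Putting those options together, a hypothetical re-analysis after editing the Dockerfile reuses the add command shown above, with the updated file and '--force':

```
# anchorectl image add myrepo.example.com:5000/app/webapp:latest --dockerfile /path/to/Dockerfile --force
```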
Additional Options
When adding an image, there are some additional (optional) parameters that can be used. We show some examples below; all apply to both the distributed and centralized analysis workflows.
# anchorectl image add docker.io/library/alpine:latest --force
✔ Added Image docker.io/library/alpine:latest
Image:
status: not-analyzed (active)
tags: docker.io/alpine:3
docker.io/alpine:latest
docker.io/dnurmi/testrepo:test0
docker.io/library/alpine:latest
digest: sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870
id: 9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5
distro: [email protected] (amd64)
layers: 1
The --force option can be used to reset the image analysis status of any image to not_analyzed, which is the base analysis state for an image. This option shouldn't be necessary under normal circumstances, but can be useful if image re-analysis is needed for any reason.
# anchorectl image add myrepo.example.com:5000/app/webapp:latest --dockerfile /path/to/dockerfile --annotation owner=someperson --annotation [email protected]
The --annotation parameter can be used to specify 'key=value' pairs to associate with the image at the time of image addition. These annotations will be carried along with the tag, and will appear in image records when fetched and in webhook notification payloads that contain image information. To change an annotation, simply run the add command again with the updated annotation and the old annotation will be overridden.
# anchorectl image add alpine:latest --no-auto-subscribe
The '--no-auto-subscribe' flag can be used if you do not wish the system to automatically subscribe the input tag to the 'tag_update' subscription, which controls whether the system automatically watches the added tag for image content updates and pulls in the latest content for centralized analysis. See Subscriptions for more information about using subscriptions and notifications in Anchore.
These options are supported in both distributed and centralized analysis.
Image Tags
In this example we added docker.io/mysql:latest. If we attempt to add a tag that maps to the same image, for example docker.io/mysql:8, Anchore Enterprise will detect the duplicate image identifiers and return the details of all tags matching that image.
Image:
status: analyzed (active)
tags: docker.io/mysql:8
docker.io/mysql:latest
digest: sha256:8191525e9110aa32b436a1ec772b76b9934c1618330cdb566ca9c4b2f01b8e18
id: 4390e645317399cc7bcb50a5deca932a77a509d1854ac194d80ed5182a6b5096
distro: [email protected] (amd64)
layers: 11
Deleting An Image
The following command instructs Anchore Enterprise to delete the image analysis from the working set using a tag. The --force option must be used if there is only one digest associated with the provided tag, or if any active subscriptions are enabled against the referenced tag.
# anchorectl image delete mysql:latest --force
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST │ STATUS │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:8191525e9110aa32b436a1ec772b76b9934c1618330cdb566ca9c4b2f01b8e18 │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘
To delete a specific image record, the digest can be supplied instead to ensure it is the exact image record you want:
# anchorectl image delete sha256:899a03e9816e5283edba63d71ea528cd83576b28a7586cf617ce78af5526f209
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST │ STATUS │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:899a03e9816e5283edba63d71ea528cd83576b28a7586cf617ce78af5526f209 │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘
Deactivate Tag Subscriptions
Check if the tag has any active subscriptions.
# anchorectl subscription list
✔ Fetched subscriptions
┌──────────────────────────────────────────────────────────────────────┬─────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────────────────────────────────────────────────┼─────────────────┼────────┤
│ docker.io/alpine:latest │ policy_eval │ false │
│ docker.io/alpine:3.12.4 │ policy_eval │ false │
│ docker.io/alpine:latest │ vuln_update │ false │
│ docker.io/redis:latest │ policy_eval │ false │
│ docker.io/centos:8 │ policy_eval │ false │
...
...
If the tag has any active subscriptions, they can be disabled (deactivated) to permit deletion:
# anchorectl subscription deactivate docker.io/alpine:3.12.6 tag_update
✔ Deactivate subscription
Key: docker.io/alpine:3.12.6
Type: tag_update
Id: a6c7559deb7d5e20621d4a36010c11b0
Active: false
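With the subscription deactivated, a follow-up delete of the tag can then proceed as described earlier (hypothetical example; '--force' may still be required if only one digest is associated with the tag):

```
# anchorectl image delete docker.io/alpine:3.12.6
```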
Advanced
Anchore Enterprise also allows adding images directly by digest/tag/timestamp tuple, which can be useful for adding images that are still available in a registry but no longer associated with a current tag.
To add a specific image by digest with the tag it should be associated with:
anchorectl image add docker.io/nginx:stable@sha256:f586d972a825ad6777a26af5dd7fc4f753c9c9f4962599e6c65c1230a09513a8
Note: this will submit the specific image by digest with the associated tag, but Anchore will treat that digest as the most recent digest for the tag, so if the image registry actually has a different history (e.g. a newer image has been pushed to that tag), then the tag history in Anchore may not accurately reflect the history in the registry.
Next Steps
Next, let’s find out how to Inspect Image Content
1.1.1 - Inspecting Image Content
Introduction
During the analysis of container images, Anchore Enterprise performs deep inspection, collecting data on all artifacts in the image including files, operating system packages and software artifacts such as Ruby GEMs and Node.JS NPM modules.
Inspecting images
The image content command can be used to return detailed information about the content of the container image.
# anchorectl image content INPUT_IMAGE -t CONTENT_TYPE
The INPUT_IMAGE can be specified in one of the following formats:
- Image Digest
- Image ID
- registry/repo:tag
The CONTENT_TYPE can be one of the following types:
- os: Operating System Packages
- files: All files in the image
- go: GoLang modules
- npm: Node.JS NPM Modules
- gem: Ruby GEMs
- java: Java Archives
- python: Python Artifacts
- nuget: .NET NuGet Packages
- binary: Language runtime locations and version (e.g. openjdk, python, node)
- malware: ClamAV malware scan results, if enabled
You can always get the latest available content types using the ‘-a’ flag:
# anchorectl image content library/nginx:latest -a
✔ Fetched content [fetching available types] library/nginx:latest
binary
files
gem
go
java
malware
npm
nuget
os
python
For example:
# anchorectl image content library/nginx:latest -t files
✔ Fetched content [0 packages] [6099 files] library/nginx:latest
Files:
┌────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┬───────┬─────┬─────┬───────┬───────────────┬──────────────────────────────────────────────────────────────────┐
│ FILE │ LINK │ MODE │ UID │ GID │ TYPE │ SIZE │ SHA256 DIGEST │
├────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┼───────┼─────┼─────┼───────┼───────────────┼──────────────────────────────────────────────────────────────────┤
│ /bin │ │ 00755 │ 0 │ 0 │ dir │ 0 │ │
│ /bin/bash │ │ 00755 │ 0 │ 0 │ file │ 1.234376e+06 │ d86b21405852d8642ca41afae9dcf0f532e2d67973b0648b0af7c26933f1becb │
│ /bin/cat │ │ 00755 │ 0 │ 0 │ file │ 43936 │ e9165e34728e37ee65bf80a2f64cd922adeba2c9f5bef88132e1fc3fd891712b │
│ /bin/chgrp │ │ 00755 │ 0 │ 0 │ file │ 72672 │ f47bc94792c95ce7a4d95dcb8d8111d74ad3c6fc95417fae605552e8cf38772c │
│ /bin/chmod │ │ 00755 │ 0 │ 0 │ file │ 64448 │ b6365e442b815fc60e2bc63681121c45341a7ca0f540840193ddabaefef290df │
│ /bin/chown │ │ 00755 │ 0 │ 0 │ file │ 72672 │ 4c1443e2a61a953804a462801021e8b8c6314138371963e2959209dda486c46e │
...
AnchoreCTL outputs a subset of fields from the content view; for example, for files only the file name and size are displayed. To retrieve the full output, pass the -o json option.
For example:
# anchorectl -o json image content library/nginx:latest -t files
✔ Fetched content [0 packages] [6099 files] library/nginx:latest
{
"files": [
{
"filename": "/bin",
"gid": 0,
"linkdest": null,
"mode": "00755",
"sha256": null,
"size": 0,
"type": "dir",
"uid": 0
},
...
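As a hypothetical illustration of post-processing that JSON with standard shell tools (run here against a small embedded sample of the same shape, not a live deployment), the file paths can be extracted like so:

```shell
# Write a tiny sample in the same shape as the 'files' output above
# (hypothetical data, not real anchorectl output).
cat <<'EOF' > /tmp/content_sample.json
{
  "files": [
    {"filename": "/bin", "mode": "00755", "size": 0, "type": "dir"},
    {"filename": "/bin/bash", "mode": "00755", "size": 1234376, "type": "file"}
  ]
}
EOF

# Pull out just the filename values; in practice the input would be
# piped from: anchorectl -o json image content <image> -t files
grep -o '"filename": *"[^"]*"' /tmp/content_sample.json | sed 's/.*"\([^"]*\)"$/\1/'
# prints:
# /bin
# /bin/bash
```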
Next Steps
- View security vulnerabilities in the image
- Subscribe to receive notifications when the image is updated, when the policy status changes, or when new vulnerabilities are detected.
- Scan Repositories
1.1.2 - Viewing Security Vulnerabilities
Introduction
The image vulnerabilities command can be used to return a list of vulnerabilities found in the container image.
# anchorectl image vulnerabilities INPUT_IMAGE -t VULN_TYPE
The INPUT_IMAGE can be specified in one of the following formats:
- Image Digest
- Image ID
- registry/repo:tag
The VULN_TYPE currently supports:
- os: Vulnerabilities against operating system packages (RPM, DPKG, APK, etc.)
- non-os: Vulnerabilities against language packages (NPM, GEM, Java Archive (jar, war, ear), Python PIP, .NET NuGet, etc.)
- all: Combination report containing both ‘os’ and ’non-os’ vulnerability records.
The system has been designed to incorporate third-party feeds for other vulnerabilities.
Examples
To generate a report of OS package (RPM/DEB/APK) vulnerabilities found in the image, including the CVE identifier, vulnerable package, severity level, vulnerability details, and the version of the fixed package (if available), use the 'os' vulnerability type:
# anchorectl image vulnerabilities debian:latest -t os
Currently the system draws vulnerability data specifically matched to the following OS distros:
- Alpine
- CentOS
- Debian
- Oracle Linux
- Red Hat Enterprise Linux
- Red Hat Universal Base Image (UBI)
- Ubuntu
- Suse Linux
- Amazon Linux 2
- Google Distroless
To generate a report of language package (NPM/GEM/Java/Python) vulnerabilities, the system draws vulnerability data from the NVD data feed, and vulnerability reports can be viewed using the ’non-os’ vulnerability type:
# anchorectl image vulnerabilities node:latest -t non-os
To generate a list of all vulnerabilities that can be found, regardless of whether they are against an OS or non-OS package type, the ‘all’ vulnerability type can be used:
# anchorectl image vulnerabilities node:latest -t all
Finally, for any of the above queries, these commands (and other anchorectl commands) can be passed the -o json flag to output the data in JSON format:
# anchorectl -o json image vulnerabilities node:latest -t all
Other options can be reviewed by issuing anchorectl image vulnerabilities --help at any time.
Next Steps
- Subscribe to receive notifications when the image is updated, when the policy status changes or when new vulnerabilities are detected.
1.2 - Image Analysis via UI
Overview
In this section you will learn how to submit images for analysis using the user interface, and how to execute a bulk removal of pending items or previously-analyzed items from within a repository group.
Note: Only administrators and standard users with the requisite role-based access control permissions are allowed to submit items for analysis, or remove previously analyzed assets.
Getting Started
From within an authenticated session, click the Image Analysis button on the navigation bar:
You will be presented with the Image Analysis view. On the right-hand side of this view you will see the Analyze Repository and Analyze Tag buttons:
These controls allow you to add entire repositories or individual items to the Anchore analysis queue, and to also provide details about how you would like the analysis of these submissions to be handled on an ongoing basis. Both options are described below in the following sections.
Analyze a Repository
After clicking the Analyze Repository button, you are presented with the following dialog:
The following fields are required:
- Registry—for example:
docker.io
- Repository—for example:
library/centos
Provided below these fields is the Watch Tags in Repository configuration toggle. By default, when One-Time Tag Analysis is selected all tags currently present in the repository will be analyzed; once initial analysis is complete the repository will not be watched for future additions.
Setting the toggle to Automatically Check for Updates to Tags specifies that the repository will be monitored for any new tag additions that take place after the initial analysis is complete. Note that you are also able to set this option for any submitted repository from within the Image Analysis view.
Once you have populated the required fields and click OK, you will be notified of the overhead of submitting this repository by way of a count that shows the maximum number of tags detected within that repository that will be analyzed:
You can either click Cancel to abandon the repository analysis request at this point, or click OK to proceed, whereupon the specified repository will be flagged for analysis.
The max image size configuration applies to repositories added via the UI. See max image size.
Analyze a Tag
After clicking the Analyze Tag button, you are presented with the following dialog:
The following fields are required:
- Registry—for example,
docker.io
- Repository—for example,
library/centos
- Tag—for example,
latest
Note: Depending upon where the dialog was invoked, the above fields may be pre-populated. For example, if you clicked the Analyze Tag button while looking at a view under Image Analysis that describes a previously-analyzed repository, the name of that repository and its associated registry will be displayed in those fields.
Some additional options are provided on the right-hand side of the dialog:
Watch Tag—enabling this toggle specifies that the tag should be monitored for image updates on an ongoing basis after the initial analysis
Force Reanalysis—if the specified tag has already been analyzed, you can force re-analysis by enabling this option. You may want to force re-analysis if you decide to add annotations (see below) after the initial analysis. This option is ignored if the tag has not yet been analyzed.
Add Annotation—annotations are optional key-pair values that can be added to the image metadata. They are visible within the Overview tab of the Image Analysis view once the image has been analyzed, as well as from within the payload of any webhook notification from Anchore that contains image information.
Once you have populated the required fields and click OK, the specified tag will be scheduled for analysis.
The max image size configuration applies to images added via the UI. See max image size.
Note: Anchore will attempt to download images from any registry without requiring further configuration. However, if your registry needs authentication then the corresponding credentials will need to be defined. See Configuring Registries for more information.
Repository Deletion
Shown below is an example of a repository view under Image Analysis:
From a repository view you can carry out actions relating to the bulk removal of items in that repository. The Analysis Cancellation / Repository Removal control is provided in this view, adjacent to the analysis controls:
After clicking this button you are presented with the following options:
Cancel Images Currently Pending Analysis—this option is only enabled if you have one or more tags in the repository view that are currently scheduled for analysis. When invoked, all pending items will be removed from the queue. This option is particularly useful if you have selected a repository for analysis that contains many tags, and the overall analysis operation is taking longer than initially expected.
Note: If there is at least one item present in the repository that is not pending analysis, you will be offered the opportunity to decide if you want the repository to be watched after this operation is complete.
Remove Repository and Analyzed Items—In order to remove a repository from the repository view in its entirety, all items currently present within the repository must first be removed from Anchore. When invoked, all items (in any state of analysis) will be removed. If the repository is being watched, this subscription is also removed.
2 - Scanning Repositories
Introduction
Individual images can be added to Anchore Enterprise using the image add command. This may be performed by a CI/CD plugin such as Jenkins, or manually by a user with the UI, AnchoreCTL, or the API.
Anchore Enterprise can also be configured to scan repositories and automatically add any tags found in the repository. This is referred to as a Repository Subscription. Once added, Anchore Enterprise will periodically check the repository for new tags and add them to Anchore Enterprise. For more details on the Repository Subscription, please see Subscriptions
Note: When you add a registry to Anchore, no images are pulled automatically. This is to prevent your Anchore deployment from being overwhelmed by a very large number of images. Therefore, you should think of adding a registry as a preparatory step that allows you to then add specific repositories or tags without having to provide the access credentials for each. Because a repository typically includes a manageable number of images, when you add a repository to Anchore, all tags in that repository are automatically pulled and analyzed. For more information about managing registries, see Managing Registries.
Adding Repositories
The repo add command instructs Anchore Enterprise to add the specified repository to the watch list.
# anchorectl repo add docker.io/alpine
✔ Added repo
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ true │
└──────────────────┴─────────────┴────────┘
Once added, Anchore Enterprise will identify the list of tags within the repository and add them to the catalog to be analyzed.
There is an option to exclude existing tags from being added to the system. This is useful when you want to watch for and add only new tags, without adding tags that are already present. To do this, use the --exclude-existing-tags option.
Also, by default Anchore Enterprise will automatically add the discovered tags to the list of subscribed tags (see Working with Subscriptions). However, this behavior can be overridden by passing the --auto-subscribe=<true|false> option.
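Combining the two options above, a hypothetical add that watches only for future tags and skips per-tag subscriptions would look like:

```
# anchorectl repo add docker.io/alpine --exclude-existing-tags --auto-subscribe=false
```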
Listing Repositories
The repo list command will show the repositories monitored by Anchore Enterprise.
# anchorectl repo list
✔ Fetched repos
┌─────────────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├─────────────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ true │
│ docker.io/elasticsearch │ repo_update │ true │
└─────────────────────────┴─────────────┴────────┘
Deleting Repositories
The del option can be used to instruct Anchore Enterprise to remove the repository from the watch list. Once the repository record has been deleted, no further changes to the repository will be detected by Anchore Enterprise.
Note: No existing image data will be removed from Anchore Enterprise.
# anchorectl repo del docker.io/alpine
✔ Deleted repo
No results
Unwatching Repositories
When a repository is added, Anchore Enterprise will monitor the repository for new and updated tags. This behavior can be disabled, preventing Anchore Enterprise from monitoring the repository for changes.
In this case the repo list command will show false in the ACTIVE column for this repository.
# anchorectl repo unwatch docker.io/alpine
✔ Unwatch repo
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ false │
└──────────────────┴─────────────┴────────┘
Watching Repositories
The repo watch command instructs Anchore Enterprise to monitor a repository for new and updated tags. By default repositories added to Anchore Enterprise are automatically watched. This option is only required if a repository has been manually unwatched.
# anchorectl repo watch docker.io/alpine
✔ Watch repo
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ true │
└──────────────────┴─────────────┴────────┘
As of v3.0, Anchore Enterprise can be configured with a size limit for images being added for analysis. This feature applies to the repo watcher: images in a watched repo that exceed the configured maximum size will not be added, and a message will be logged in the catalog service. This feature is disabled by default; see the documentation for additional details on this feature and instructions on how to configure the limit.
Removing a Repository and All Images
There may be a time when you wish to stop a repository analysis while the analysis is running (e.g., after accidentally watching a repository with a large number of tags). There are several steps in the process, outlined below. We will use docker.io/library/alpine as an example.
Note: Be careful when deleting images. In this flow, Anchore deletes the image, not just the repository/tag combo. Because of this, deletes may impact more than the expected repository since an image may have tags in multiple repositories or even registries.
Check the State
Take a look at the repository list.
anchorectl repo list
✔ Fetched repos
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ true │
└──────────────────┴─────────────┴────────┘
Also look at the image list.
anchorectl image list | grep docker.io/alpine
✔ Fetched images
│ docker.io/alpine:20220328 │ sha256:c11c38f8002da63722adb5111241f5e3c2bfe4e54c0e8f0fb7b5be15c2ddca5f │ not_analyzed │ active │
│ docker.io/alpine:3.16.0 │ sha256:4ff3ca91275773af45cb4b0834e12b7eb47d1c18f770a0b151381cd227f4c253 │ not_analyzed │ active │
│ docker.io/alpine:20220316 │ sha256:57031e1a3b381fba5a09d5c338f7dbeeed2260ad5100c66b2192ab521ae27fc1 │ not_analyzed │ active │
│ docker.io/alpine:3.14.5 │ sha256:aee6c86e12b609732a30526ddfa8194e4a54dc5514c463e4c2e41f5a89a0b67a │ not_analyzed │ active │
│ docker.io/alpine:3.15.5 │ sha256:26284c09912acfc5497b462c5da8a2cd14e01b4f3ffa876596f5289dd8eab7f2 │ not_analyzed │ active │
...
...
Removing the Repository from the Watched List
Unwatch docker.io/library/alpine to prevent future automatic updates.
# anchorectl repo unwatch docker.io/alpine
✔ Unwatch repo
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ false │
└──────────────────┴─────────────┴────────┘
Delete the Repository
Delete the repository. This may need to be done a couple of times if the repository still shows in the repository list.
# anchorectl repo delete docker.io/alpine
✔ Deleted repo
No results
Forcefully Delete the Images
Delete the analysis/images. This may need to be done several times to remove all images depending on how many there are.
# for i in `anchorectl -q image list | grep docker.io/alpine | awk '{print $2}'`
> do
> anchorectl image delete ${i} --force
> done
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST │ STATUS │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:c11c38f8002da63722adb5111241f5e3c2bfe4e54c0e8f0fb7b5be15c2ddca5f │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST │ STATUS │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:4ff3ca91275773af45cb4b0834e12b7eb47d1c18f770a0b151381cd227f4c253 │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘
...
...
...
Verify the Repository and All Images are Deleted
Check the repository list.
# anchorectl repo list
✔ Fetched repos
┌─────┬──────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├─────┼──────┼────────┤
└─────┴──────┴────────┘
Check the image list.
# anchorectl image list | grep docker.io/alpine
✔ Fetched images
<no output>
Next Steps
- View security vulnerabilities in the image
- Subscribe to receive notifications when the image is updated, when the policy status changes, or when new vulnerabilities are detected.
3 - Kubernetes Inventory
Anchore Enterprise allows you to navigate through your Kubernetes clusters to quickly and easily assess your vulnerabilities, apply policies, and take action on them. You'll need to configure your clusters for collection before you can take advantage of these features. See our installation instructions to get set up.
Watching Clusters and Namespaces
Users can opt to automatically scan all the images that are deployed to a specific cluster or namespace. This is helpful for monitoring your overall security posture in your runtime and enforcing policies. Before subscribing to a new cluster, it's important to ensure you have the proper credentials saved in Anchore to pull its images from the registry. Also, watching a new cluster can create a considerable queue of images to work through and impact other users of your Anchore Enterprise deployment.
Using Charts Filters
The charts at the top of the UI provide key contextual information about your runtime. Upon landing on the page you’ll see a summary of your policy evaluations and vulnerabilities for all your clusters. Drilling down into a cluster or namespace will update these charts to represent the data for the selected cluster and/or namespace. Additionally, users can choose to view only the clusters or namespaces that match the selected filters. For example, selecting only high and critical vulnerabilities will show only the clusters and/or namespaces that have those vulnerabilities.
Using Views
In addition to navigating your runtime inventory by clusters and namespaces, users can opt to view the images or vulnerabilities across the entire runtime. This is a great way to identify vulnerabilities across your runtime and assess their impact.
Assessing impact
Another important aspect of the Kubernetes Inventory UI is the ability to assess how a vulnerability in a container image impacts your environment. For every container image where you see a note about its usage being seen in a particular cluster and X more…, you can mouse over the link for a detailed list of everywhere else that container image is being used. This is a fast way to determine the “blast radius” of a vulnerability.
Data Delays
Due to the processing required to generate the data used by the Kubernetes Inventory UI, the results displayed may not be fully up to date. The overall delay depends on the configuration of how often inventory data is collected, and how frequently your reporting data is refreshed. This is similar to delays present on the dashboard.
Policy and Account Considerations
The Kubernetes Inventory is only available for an account’s default policy. You may want to consider setting up an account specifically for tracking your Kubernetes Inventory and enforcing a policy.
4 - Results
On occasion, you may see a vulnerability identified by a GHSA (GitHub Security Advisory) ID instead of a CVE (Common Vulnerabilities and Exposures) ID. The reason for this is that Anchore uses an order of precedence to match vulnerabilities from feeds. Anchore gives precedence to OS and third-party package feeds, which often contain more up-to-date information and provide more accurate matches with image content. However, these feeds may provide GHSA vulnerability IDs instead of the CVE IDs provided by NVD (National Vulnerability Database) feeds.
The vulnerability ID Anchore reports depends on how the vulnerability is matched. The order of precedence is packages installed by OS package managers, then third-party packages (java, python, node), and then NVD. The GHSA feeds tend to be ahead of the NVD feeds, so there may be some vulnerabilities that match a GHSA before they match a CVE from NVD.
We are working to unify the presentation of vulnerability IDs to keep things more consistent. Currently our default is to report the CVE unless the GHSA provides a more accurate match.
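The precedence logic described above can be sketched as follows. This is an illustrative sketch only, not Anchore's actual matching code; the source names and data shapes are assumptions for the example.

```python
# Illustrative sketch of the matching precedence described above:
# OS package feeds first, then third-party language package feeds,
# then NVD. A GHSA from a higher-precedence feed is reported even
# when NVD also has a CVE for the same package.
PRECEDENCE = ["os", "third-party", "nvd"]

def reported_vulnerability_id(matches):
    """Given candidate matches as (feed_source, vuln_id) pairs, return
    the ID from the highest-precedence source that produced a match."""
    for source in PRECEDENCE:
        for match_source, vuln_id in matches:
            if match_source == source:
                return vuln_id
    return None

# The third-party feed's GHSA wins over the NVD CVE:
matches = [("nvd", "CVE-2021-23337"), ("third-party", "GHSA-35jh-r3h4-6jhm")]
print(reported_vulnerability_id(matches))  # GHSA-35jh-r3h4-6jhm
```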
5 - Working with Subscriptions
Introduction
Anchore Enterprise supports 7 types of subscriptions.
- Tag Update
- Policy Update
- Vulnerability Update
- Analysis Update
- Alerts
- Repository Update
- Runtime Inventory
For detailed information about subscriptions, please see Subscriptions.
Managing Subscriptions
Subscriptions can be managed using AnchoreCTL.
Listing Subscriptions
Running the subscription list command will output a table showing the type and status of each subscription.
# anchorectl subscription list | more
✔ Fetched subscriptions
┌──────────────────────────────────────────────────────────────────────┬─────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────────────────────────────────────────────────┼─────────────────┼────────┤
│ docker.io/alpine:latest │ policy_eval │ false │
│ docker.io/alpine:3.12.4 │ policy_eval │ false │
│ docker.io/alpine:latest │ vuln_update │ false │
│ docker.io/redis:latest │ policy_eval │ false │
│ docker.io/centos:8 │ policy_eval │ false │
│ docker.io/alpine:3.8.4 │ policy_eval │ false │
│ docker.io/centos:8 │ vuln_update │ false │
...
└──────────────────────────────────────────────────────────────────────┴─────────────────┴────────┘
Note: Tag Subscriptions are tied to registry/repo:tag and not to image IDs.
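Because a subscription key is a full tag reference rather than an image ID, it can be broken into its registry, repository, and tag components. The helper below is a hypothetical illustration of that key shape, not part of AnchoreCTL.

```python
# Hypothetical helper illustrating the shape of a subscription key:
# 'registry/repo:tag', e.g. 'docker.io/alpine:latest'. Subscriptions
# follow the tag, not any particular image ID behind it.
def split_subscription_key(key):
    """Split 'registry/repo:tag' into its three parts."""
    registry, _, rest = key.partition("/")
    repo, _, tag = rest.rpartition(":")
    return registry, repo, tag

print(split_subscription_key("docker.io/alpine:latest"))
# ('docker.io', 'alpine', 'latest')
```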
Activating Subscriptions
The subscription activate command is used to enable a subscription type for a given image. The command takes the following form:
anchorectl subscription activate SUBSCRIPTION_KEY SUBSCRIPTION_TYPE
SUBSCRIPTION_TYPE should be either:
- tag_update
- vuln_update
- policy_eval
- analysis_update
SUBSCRIPTION_KEY should be the name of the subscribed tag, e.g. docker.io/ubuntu:latest
For example:
# anchorectl subscription activate docker.io/ubuntu:latest tag_update
✔ Activate subscription
Key: docker.io/ubuntu:latest
Type: tag_update
Id: 04f0e6d230d3e297acdc91ed9944278d
Active: true
and to de-activate:
# anchorectl subscription deactivate docker.io/ubuntu:latest tag_update
✔ Deactivate subscription
Key: docker.io/ubuntu:latest
Type: tag_update
Id: 04f0e6d230d3e297acdc91ed9944278d
Active: false
Tag Update Subscription
Any new tag added to Anchore Enterprise by AnchoreCTL will, by default, enable the Tag Update Subscription.
If you do not need this functionality, you can use the flag --no-auto-subscribe or set the environment variable ANCHORECTL_IMAGE_NO_AUTO_SUBSCRIBE when adding new tags.
# ./anchorectl image add docker.io/ubuntu:latest --no-auto-subscribe
Runtime Inventory Subscription
AnchoreCTL provides commands to help navigate the runtime_inventory subscription. The subscription will monitor a specified runtime inventory context and add its images to the system for analysis.
Listing Inventory Watchers
# ./anchorectl inventory watch list
✔ Fetched watches
┌──────────────────────────┬───────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────┼───────────────────┼────────┤
│ cluster-one/my-namespace │ runtime_inventory │ false │
└──────────────────────────┴───────────────────┴────────┘
Activating an Inventory Watcher
Note: This command will create the subscription if one does not already exist.
# ./anchorectl inventory watch activate cluster-one/my-namespace
✔ Activate watch
┌──────────────────────────┬───────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────┼───────────────────┼────────┤
│ cluster-one/my-namespace │ runtime_inventory │ true │
└──────────────────────────┴───────────────────┴────────┘
Deactivating an Inventory Watcher
# ./anchorectl inventory watch deactivate cluster-one/my-namespace
✔ Deactivate watch
┌──────────────────────────┬───────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────┼───────────────────┼────────┤
│ cluster-one/my-namespace │ runtime_inventory │ false │
└──────────────────────────┴───────────────────┴────────┘
Webhook Configuration
Webhooks are configured in the Anchore Enterprise configuration file config.yaml.
In the sample configuration file, webhooks are disabled (commented out).
webhooks:
webhook_user: 'user'
webhook_pass: 'pass'
ssl_verify: False
The webhooks can, optionally, pass basic credentials to the webhook endpoint; if these are not required, the webhook_user and webhook_pass entries can be commented out. By default, TLS/SSL connections will validate the certificate provided. This validation can be suppressed by uncommenting the ssl_verify option and setting it to False.
url: 'http://localhost:9090/general/<notification_type>/<userId>'
If configured, the general webhook will receive all notifications (policy_eval, tag_update, vuln_update) for each user. In this case <notification_type> will be replaced by the appropriate type, and <userId> will be replaced by the configured user, which is, by default, admin. e.g. http://localhost:9090/general/vuln_update/admin
policy_eval:
url: 'http://localhost:9090/somepath/<userId>'
webhook_user: 'mehuser'
webhook_pass: 'mehpass'
Specific endpoints for each event type can also be configured; for example, an endpoint for policy_eval notifications. In these cases the URL, username, password, and SSL/TLS verification can be specified.
error_event:
url: 'http://localhost:9090/error_event/'
This webhook, if configured, will send a notification whenever any FATAL system events are logged.
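The receiving end of these webhooks is any HTTP service you control. As a minimal sketch (not part of Anchore), a Python endpoint that accepts the POSTed notifications and checks the basic credentials from the sample configuration above might look like this; the path layout and the 'user'/'pass' credentials mirror the sample config and should be replaced in a real deployment.

```python
# Minimal sketch of a webhook receiver for the notifications
# described above. Credentials mirror the sample config; real
# deployments should use TLS and real secrets.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_USER, EXPECTED_PASS = "user", "pass"

def basic_auth_ok(header_value):
    """Validate an HTTP Basic 'Authorization' header against the
    credentials configured in the webhooks section."""
    if not header_value or not header_value.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(header_value[6:]).decode()
    except Exception:
        return False
    user, _, password = decoded.partition(":")
    return user == EXPECTED_USER and password == EXPECTED_PASS

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if not basic_auth_ok(self.headers.get("Authorization")):
            self.send_response(401)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)  # JSON notification payload
        # self.path is e.g. /general/vuln_update/admin
        print("notification on", self.path, len(body), "bytes")
        self.send_response(200)
        self.end_headers()

# To run: HTTPServer(("localhost", 9090), WebhookHandler).serve_forever()
```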
6 - Reports
Overview
The Reports tab is your gateway to producing insights into the collective status of your container image environment based on the back-end Enterprise Reporting Service.
Note: Because the reporting data cycle is configurable, the results shown in this view may not precisely reflect actual analysis output at any given time.
For more information on how to modify this cycle or the Reporting Service in general, please refer to the Reporting Service documentation.
Report View
The Report feature provides the tools to create custom reports, set a report to run on a schedule (or store the report for future use), and get notified when they’re executed in order to receive the insights you’re interested in for account-wide artifacts.
In addition, you can create user templates (also known as custom templates) that use any of the preconfigured system templates offered with the application as their basis, or create your own templates from scratch. Templates provide the structure and filter definitions the application uses in order to generate reports.
To jump to a particular guide, select from the following below:
6.1 - New Reports
Overview
The New Reports tab in the Reports view is where you can create a new report, either on an ad-hoc basis for immediate download, or for it to be saved for future use. Saved reports can be executed immediately, scheduled, or both.
Note: The New Reports tab will be the default tab selected in the Reports view when you don’t yet have any saved reports.
Reports created in this view are based on templates. Templates provide the output structure and the filter definitions the user can configure to shape the generated report. The Anchore Enterprise client provides immediate access to a number of preconfigured system templates that can be used as the basis for user templates. For more information on how to create and manage templates, please refer to the Templates documentation.
Creating a Report
The initial view of the New Reports tab is shown below:
In the above view you can see that the application is inviting you to select a template from the dropdown menu. You can either select an item from this dropdown or click in the field itself and enter text in order to filter the list.
Once a template is selected, the view will change to show the available filters for the selected template. The following screenshot shows the view after selecting the Artifacts by Vulnerability
template:
At this point you can click Preview Report to see the summary output and download the information, or you can refine the report by adding filters from the associated dropdown. As with the template selection, you can either select an item from the dropdown or click in the field itself and enter text in order to filter the list.
After you click the Preview Report button, you are presented with the summary output and the ability to download the report in a variety of formats:
At this point you can click any of the filters you applied in order to adjust them (or remove them entirely). The results will update automatically. If you want to add more filters you can click the [ Edit ] button and select more items from the available options and then click Preview Report again to see the updated results.
You can now optionally configure the output information by clicking the [ Configure Columns ] button. The resulting popup allows you to reorder and rename the columns, as well as remove columns you don’t want to see in the output or add columns that are not present by default:
Once you’re satisfied with the output, click Download Full Report to download the report in the selected format. The formats provided are:
- CSV - comma-separated values, with all nested objects flattened into a linear list of items
- Flat JSON - JavaScript object notation, with all nested objects flattened into a linear list of items
- Raw JSON - JavaScript object notation, with all nested objects preserved
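The difference between the Raw JSON and the flattened formats can be illustrated with a small sketch. This is an assumption about the general idea of flattening, not the exact key-naming scheme Anchore uses.

```python
# Illustrative sketch of flattening nested report objects into a
# single level of dotted keys, as the Flat JSON and CSV formats do.
# The dotted-key convention here is an assumption for the example.
def flatten(obj, prefix=""):
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

raw = {"image": {"tag": "nginx:latest", "vuln": {"id": "CVE-2023-0001"}}}
print(flatten(raw))
# {'image.tag': 'nginx:latest', 'image.vuln.id': 'CVE-2023-0001'}
```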
Saving a Report
The above describes the generation of an ad-hoc report for download, which may be all you need. However, you can also save the report for future use. To do so, click the Save Report button. The following popup will appear:
Provide a name and optional description for the report, and then select whether you want to save the report and store results immediately, set it to run on a schedule, or both. If you select the Generate Report option, you can then select the frequency of the report generation. Once you’re satisfied with the configuration, click Save.
The saved report will be stored under Saved Reports and you will immediately be transitioned to this view on success. The features within this view are described in the Saved Reports section.
6.2 - Quick Report
Overview
Generate a report utilizing the back-end Enterprise Reporting Service through a variety of formats - table, JSON, and CSV. If you’re interested in refining your results, we recommend using the plethora of optional filters provided.
Note: Because the reporting data cycle is configurable, the results shown in this view may not precisely reflect actual analysis output at any given time.
For more information on how to modify this cycle or the Reporting Service in general, please refer to the Reporting Service documentation.
The following sections in this document describe how to select a query, add optional filters, and generate a report.
Reports
Selecting a Query
To select a query, click the available dropdown present in the view and select the type of report you’re interested in generating.
Images Affected by Vulnerability
View a list of images and their various artifacts that are affected by a vulnerability. By default, a couple optional filters are provided:
Filter | Description |
---|---|
Vulnerability Id | Vulnerability ID |
Tag Current Only | If set to true, current tag mappings are evaluated. Otherwise, all historic tag mappings are evaluated |
Policy Compliance History by Tag
Query your policy evaluation data using this report type. By default, this report was crafted with compliance history in mind. Quite a few optional filters are provided to include historic tag mappings and historic policy evaluations from any policy that is or was set to active. More info below:
Filter | Description |
---|---|
Registry Name | Name of the registry |
Repository Name | Name of the repository |
Tag Name | Name of the tag |
Tag Current Only | If set to true, current tag mappings are evaluated. Otherwise, all historic tag mappings are evaluated |
Policy Evaluation Latest Only | If set to true, only the most recent policy evaluation is processed. Otherwise, all historic policy evaluations are evaluated |
Policy Active | If set to true, only the active policy at the time of this query is used. Otherwise, all historically active policies are also included. This attribute is ignored if a policy ID or digest is specified in the filter |
Note that the default filters provided are optional.
Adding Optional Filters
Once a report type has been selected, an Optional Filters dropdown becomes available with items specific to that query. Any filters considered default for that report type, such as those listed above, are also shown.
You can remove any filters you don’t need by pressing the x in their top-right corner, but as long as they’re empty/unset, they will be ignored at the time of report generation.
Generating a Report
After a report type has been selected, you can immediately generate a report by clicking the Generate Report button shown in the bottom left of the view.
By default, the Table format is selected but you can click the dropdown and modify the format for your report by selecting either JSON or CSV.
Table
A fast and easy way to browse your data, the table report retrieves paginated results and provides optional sorting by clicking on any column header. Each column is also resizable for your convenience. You can choose to fetch more or fetch all items although please note that depending on the size of your data, fetching all items may take a while.
Download Options
Download your report in JSON or CSV format. Various metadata such as the report type, any filters used when querying, and the timestamp of the report are included with your results. Please note that depending on the size of your data, the download may take a while.
6.3 - Report Manager
Overview
Use the Report Manager view to create custom queries, set a report to run on a schedule (or store the configuration for future use), and get notified when they’re executed in order to receive the insights you’re interested in for account-wide artifacts. The results are provided through a variety of formats - tabular, JSON, or CSV - and rely on data retrieved from the back-end Enterprise Reporting Service.
Note: Because the reporting data cycle is configurable, the results shown in this view may not precisely reflect actual analysis output at any given time.
For more information on how to modify this cycle or the Reporting Service in general, please refer to the Reporting Service documentation.
The following sections in this document describe templates, queries, scheduling reports, and viewing your results.
Report Manager
Templates
Templates define the filters and table field columns used by queries to generate report output. The templates provided by the system or stored by other users in your account can be used directly to create a new query or as the basis for crafting new templates.
System Templates
By default, the UI provides a set of system templates:
- Images Failing Policy Evaluation
- This template contains a customized set of filters and fields, and is based on “Policy Compliance History by Tag”.
- Images With Critical Vulnerabilities
- This template contains a customized set of filters and fields, and is based on “Images Affected by Vulnerability”.
- Artifacts by Vulnerability
- This template contains all filters and fields by default.
- Tags by Vulnerability
- This template contains all filters and fields by default.
- Images Affected by Vulnerability
- This template contains all filters and fields by default.
- Policy Compliance History by Tag
- This template contains all filters and fields by default.
- Vulnerabilities by Kubernetes Namespace
- This template contains all filters and fields by default.
- Vulnerabilities by Kubernetes Container
- This template contains all filters and fields by default.
- Vulnerabilities by ECS Container
- This template contains all filters and fields by default.
Creating a Template
In order to define a template’s list of fields and filters, navigate to the Create a New Template section of the page, select a base configuration provided by the various System Templates listed above, and click Next to open a modal.
Provide a name for your new template, add an optional description, and modify any fields or filters to your liking.
The fields you choose control what data is shown in your results and are displayed from left to right within a report table. To optionally refine the result set returned, you can add or remove filter options, set a default value for each entry and specify if the filter is optional or required.
Note that templates must contain at least one field and one filter.
Once the template is configured to your satisfaction, click OK to save it as a Stored Template. Your new template is now available to hydrate a query or as a basis for future templates.
Editing a Template
To view or edit a template that has been stored previously, click its name under Stored Report Items on the right of the page. As with the creation of a template, the list of fields and filters can be customized to your preference.
When you’re done, click OK to save any new changes or Cancel to discard them.
Deleting a Template
To delete a template that you have configured previously, click the red “x” to the left of its name under Stored Report Items and click Yes to remove it. Note that once the template has been removed, you won’t be able to recover it.
Queries
Queries are based on a template’s configuration and can be submitted to the back-end Enterprise Reporting Service on a recurring schedule to generate reports. The results can then be previewed in tabular form and downloaded in JSON or CSV format.
Creating a Query
To create a query, navigate to the Create a New Query section of the page, select a template configuration, and click Next to open a modal.
After you provide a unique name for the query and an optional description, click OK to save your new query. You will be automatically navigated to view it.
Editing a Query
To view or edit a query, click its name under Stored Report Items on the right of the page to be navigated to the Query View.
Within this view, you can edit its name and description, set a schedule to act as the base configuration for Scheduled Items, and view the various filters set by the template this query was based on.
To save any changes to the query, click Save Query or Save Query and Schedule Report.
Setting a Schedule
In order to set or modify a query’s schedule, click Add/Change Schedule to open a modal.
Reports can be generated daily, weekly, or monthly at a time of your choosing. This can be set according to your timezone or UTC. By default, the schedule is set for weekly on Mondays at 12PM your time.
When scheduling reports to be generated monthly, note that multiple days of the month can be selected and that certain days (the 31st, for example) may not trigger every month.
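The caveat about certain days not triggering every month can be checked with a short sketch using Python's standard calendar module (an illustration, not part of the scheduler):

```python
# Sketch of the caveat above: a report scheduled for the 31st only
# triggers in months that actually have 31 days.
import calendar

def months_that_trigger(day, year):
    """Return the months in `year` whose last day is >= `day`."""
    return [m for m in range(1, 13)
            if calendar.monthrange(year, m)[1] >= day]

print(months_that_trigger(31, 2024))  # [1, 3, 5, 7, 8, 10, 12]
```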
In the top-right corner of the modal, you can toggle the enabled state of the schedule which determines whether reports will be executed continuously on the timed interval you saved. Note that pressing OK modifies the schedule but does not save it to the query. Please click the Save Query or Save Query and Schedule Report to do so.
Deleting a Query
To delete a query, click the red “x” to the left of its name under Stored Report Items and click Yes to remove it. Note that every scheduled report associated with that query will also be removed and not be recoverable.
Scheduled Reports
Adding a Scheduled Item
Once you’ve crafted a query based on a system or custom template, supplied any filters to refine the results, and previewed the report generated to ensure it is to your satisfaction, you can add it to be scheduled by clicking Save Query and Schedule Report.
Any schedules created from this view will be listed at the bottom.
Editing a Scheduled Item
To edit a scheduled item, click on Tools within that entry’s Actions column and select Edit Scheduled Item to open a modal.
Here, you can modify the name, description, and schedule for that item. Click Yes to save any new changes or Cancel to discard them.
Deleting a Scheduled Item
To delete a scheduled item, click on Tools within that entry’s Actions column and select Delete Scheduled Item. Note that every report generated from that schedule will also be removed upon clicking Yes and will not be recoverable.
Viewing Results
Click View under a scheduled item’s Actions column to expand the row and view its list of associated reports sorted by most recent. Click View or Tools > View Results to navigate to that report’s results.
If you configured notifications to be sent when a report has been executed, you can navigate to the report’s results by clicking the link provided in its notification.
Downloading results
A preview of up to 1000 result items are shown in tabular form which provides optional sorting by clicking on any column header. If a report contains more than 1000 results, please download the data to view the full report. To do so, click Download to JSON or Download to CSV based on your preferred format.
Various metadata such as the report type, any filters used when querying, and the timestamp of the report are included with your results. Please note that depending on the size of your data, the download may take a while.
Configure Notifications
To be notified whenever a report has been generated, navigate to Events & Notifications > Manage Notifications. Once any previous notification configurations have loaded, add a new one from your preferred endpoint (Email, Slack, etc.), and select the predefined event selector option for Scheduled Reports.
This includes the availability of a new result or any report execution failures.
Once you receive a notification, click on the link provided to automatically navigate to the UI to view the results for that report.
6.4 - Saved Reports
Overview
The Saved Reports tab in the Reports view is where you can view, configure, download, or delete reports that have been saved for future use. Each report entry may contain zero or more results, depending on whether the report has been run or not.
Note: The Saved Reports tab will be the default tab selected in the Reports view when you have one or more saved reports.
Viewing a Report
An example of the Saved Reports tab is shown below:
Clicking anywhere within the row other than on an active report title or on the Actions button will expand it, displaying the executions for that report if any are available. Clicking an active report title will take you to a view displaying the latest execution for that report. An inactive report title indicates that no results are yet available.
If a report has been scheduled but has no executions, the expanded row will look like the following example:
Reports with one or more executions will look like the following example:
In the above example you can see a list of previously executed reports. Their completion status is indicated by the green check mark. Reports that are still in progress are indicated by a spinning icon. Reports that are queued for execution are indicated by an hourglass icon. The reports shown here are all complete, so they can be downloaded by clicking the Download Full Report button. Incomplete, queued, or failed reports cannot be downloaded.
The initial view shows up to four reports, with any older items being viewable by clicking the View More button. The View More button will disappear when there are no more reports to show. In addition:
Clicking the Refresh List button will refresh the list of reports, including any executions that may have completed since the last time the list was refreshed. Clicking the Generate Now button will generate a new execution of the report.
Individual report items can be deleted by clicking the Delete button. If the topmost report item is deleted, the link in the table row will correspond to the next report item in the list (if any are available).
Note: Deleting all the execution entries for a report will not delete the report itself. The report will still be available for future executions.
Tools Dropdown
Each report row has a Tools control that allows you to perform the following actions:
- Configure: Opens the report configuration popup, allowing you to change the report name, description, and schedule
- Generate Now: Generates a new execution of the report
- Save as Template: Saves the report as a user template, allowing you to use it as the basis for future reports
- Delete: Removes the report and any associated executions. If all reports are deleted, the page will transition to the New Reports tab and the Saved Reports tab will be disabled.
6.5 - Templates
Overview
The Templates tab in the Reports view is where you can view and manage report templates. Templates provide the basis for creating the reports executed by the system and specify which filters are applied to the retrieved dataset and how the returned data is shaped.
A number of system templates are provided with the application, and all of these can be used as-is or as a starting point for creating your own user templates.
Viewing Templates
An example of the System Templates view in the Templates tab is shown below:
In this view you can see all the system templates provided by default, and their associated descriptions. System templates cannot be deleted, but can be copied and modified to create your own user templates.
An alternate way of creating a new user template is by clicking the Create New Template button. You will be presented with a dialog that allows you to select an existing system template as your starting point, or base your composition on any of the custom templates created by you or other users:
Selecting a template from the provided dropdown will open the Create a New Template dialog:
Within this dialog you can provide a unique name and optional description for the new template. In addition, you can modify the filters available when composing reports based on this template, and the columns that will be displayed in the resulting report:
Filters: You can add or remove filters, set default values, and specify if the filter is optional or required. Filters are displayed from left to right when composing a report—you can change the display order by clicking on a row hotspot and dragging the row item up or down the list.
Columns: You can add or remove columns, change their display order, or provide custom column names to be used when the data is presented in the tabular form offered by comma-separated values (CSV) file downloads. Columns are displayed from left to right within a report table. You can change the display order by clicking on a row hotspot and dragging the row item up or down the list. Note that templates must contain at least one column.
Once you have configured the filters and columns, you can specify if the report will be scoped to return results against the analysis data in either the current selected account or from all accounts, and click OK. The new template will be added to the list of available user templates.
Custom Templates
The custom templates view shows all user-defined templates present in the current selected account. An example of the Custom Templates view is shown below:
Unlike system templates, custom templates can be edited or deleted in addition to being copied. Clicking the Tools button for a custom template will display the following options:
Note that any changes you make to templates in this view, or any new entries you create, will be available to all users in the current selected account.
7 - False Positives
7.1 - Hints
7.2 - Corrections
When Anchore analyzes an image, it reports a Software Bill of Materials (SBOM) to be stored and later scanned in order to match package metadata against known vulnerabilities. One aspect of the SBOM is a best effort guess of the CPE (Common Platform Enumeration) for a given package. The Anchore analyzer builds a list of CPEs for each package based on the metadata that is available (ex. for Java packages, the manifest, which contains multiple different version specifications among other metadata), but sometimes gets this wrong.
For example, Java Spring packages are generally reported as follows:
- Spring Core, version 5.1.4
cpe:2.3:a:*:spring-core:5.1.4:*:*:*:*:*:*:*
However, since Spring is a framework built by Pivotal Software, the CPE referenced in the NVD database looks more like:
cpe:2.3:a:pivotal_software:spring_security:5.1.4:*:*:*:*:*:*:*
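The difference between the two CPEs is easier to see when the formatted string is split into its named components. The naive parser below is an illustration only; it does not handle escaped ':' characters allowed by the CPE 2.3 specification.

```python
# Illustrative parse of a CPE 2.3 formatted string into its named
# components, showing the vendor and product fields that the
# Corrections feature lets you fix. Naive split: no escape handling.
CPE_FIELDS = ["cpe", "cpe_version", "part", "vendor", "product",
              "version", "update", "edition", "language",
              "sw_edition", "target_sw", "target_hw", "other"]

def parse_cpe(cpe):
    return dict(zip(CPE_FIELDS, cpe.split(":")))

guessed = parse_cpe("cpe:2.3:a:*:spring-core:5.1.4:*:*:*:*:*:*:*")
actual = parse_cpe("cpe:2.3:a:pivotal_software:spring_security:5.1.4:*:*:*:*:*:*:*")
print(guessed["vendor"], "->", actual["vendor"])  # * -> pivotal_software
```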
To facilitate this correction, Anchore provides the Correction feature. A user can provide a correction that will update a given package’s metadata so that attributes (including CPEs) can be corrected when Anchore performs a vulnerability scan.
Using the above example, a user can add a correction using anchorectl or via HTTP POST to the /corrections endpoint:
{
"description": "Update Spring Core CPE",
"match": {
"type": "java",
"field_matches": [
{
"field_name": "package",
"field_value": "spring-core"
},
{
"field_name": "implementation-version",
"field_value": "5.1.4.RELEASE"
}
]
},
"replace": [
{
"field_name": "cpes",
"field_value": "cpe:2.3:a:pivotal_software:spring_security:5.1.4:*:*:*:*:*:*:*"
}
],
"type": "package"
}
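As a sketch, the same correction could be submitted over the API with a short script. The endpoint URL, API version path, and token handling below are assumptions for illustration; consult your deployment’s Swagger spec for the authoritative details:

```python
import json
import urllib.request

# Hypothetical endpoint and credentials -- adjust for your deployment.
ANCHORE_URL = "http://localhost:8228/v2/corrections"
API_TOKEN = "replace-with-a-real-token"

# The correction payload, identical to the JSON document shown above.
correction = {
    "description": "Update Spring Core CPE",
    "match": {
        "type": "java",
        "field_matches": [
            {"field_name": "package", "field_value": "spring-core"},
            {"field_name": "implementation-version", "field_value": "5.1.4.RELEASE"},
        ],
    },
    "replace": [
        {
            "field_name": "cpes",
            "field_value": "cpe:2.3:a:pivotal_software:spring_security:5.1.4:*:*:*:*:*:*:*",
        }
    ],
    "type": "package",
}

request = urllib.request.Request(
    ANCHORE_URL,
    data=json.dumps(correction).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to actually submit the correction
```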
JSON Reference:
- description: A description of the correction being added (for note-taking purposes).
- replace: A list of field name/value pairs to replace.
- type: The type of correction being added. Currently only “package” is supported.
- match:
- type: The type of package to match upon. Supported values are based on the type of content available to images being analyzed (ex. java, gem, python, npm, os, go, nuget)
- field_matches: A list of name/value pairs based on which package metadata fields to match this correction upon
- The schema of the fields to match can be found by outputting the direct JSON content for the given content type:
- Ex. Java Package Metadata JSON:
{
  "cpes": [
    "cpe:2.3:a:*:spring-core:5.1.4.RELEASE:*:*:*:*:*:*:*",
    "cpe:2.3:a:*:spring-core:5.1.4.RELEASE:*:*:*:*:java:*:*",
    "cpe:2.3:a:*:spring-core:5.1.4.RELEASE:*:*:*:*:maven:*:*",
    "cpe:2.3:a:spring-core:spring-core:5.1.4.RELEASE:*:*:*:*:*:*:*",
    "cpe:2.3:a:spring-core:spring-core:5.1.4.RELEASE:*:*:*:*:java:*:*",
    "cpe:2.3:a:spring-core:spring-core:5.1.4.RELEASE:*:*:*:*:maven:*:*"
  ],
  "implementation-version": "5.1.4.RELEASE",
  "location": "/app.jar:BOOT-INF/lib/spring-core-5.1.4.RELEASE.jar",
  "maven-version": "N/A",
  "origin": "N/A",
  "package": "spring-core",
  "specification-version": "N/A",
  "type": "JAVA-JAR"
}
Note: if a new field is specified here, it will be added to the content output when the correction is matched. See below for additional functionality around CPEs.
To add the above JSON using anchorectl, use the following command:
anchorectl correction add -i path-to-file.json
You can achieve the same result using command-line flags instead of a JSON file:
anchorectl correction add \
--match package=spring-core \
--match implementation-version="5.1.4.RELEASE" \
--type java \
  --replace cpes="cpe:2.3:a:pivotal_software:spring_security:5.1.4:*:*:*:*:*:*:*" \
--replace description="Update Spring Core CPE"
You can also list, get, and delete corrections with anchorectl.
The command to retrieve a list of existing corrections is:
anchorectl correction list
The command to delete a correction is:
anchorectl correction delete {correction_id}
# {correction_id} is the UUID of the correction you wish to delete
The command to get a correction is:
anchorectl correction get {correction_id}
# {correction_id} is the UUID of the correction you wish to get
The result of the correction can be checked using the image content command of anchorectl. For example, to see the above Java correction, run:
anchorectl image content -t java Image_sha256_ID -o json
We would see the spring-core package returned as having the CPE cpe:2.3:a:pivotal_software:spring_security:5.1.4:*:*:*:*:*:*:*
Note: Don’t forget to replace the Image_sha256_ID with the image ID you’re trying to test.
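If you want to verify the correction programmatically, the JSON output can be filtered with a small script. This is an illustrative sketch that assumes the content output is a list of package records shaped like the Java metadata JSON shown earlier; the exact output structure may differ in your version:

```python
import json

# Stand-in for the output of:
#   anchorectl image content -t java <image ID> -o json
# (here inlined so the sketch is self-contained)
content = json.loads("""
[{"package": "spring-core",
  "implementation-version": "5.1.4.RELEASE",
  "cpes": ["cpe:2.3:a:pivotal_software:spring_security:5.1.4:*:*:*:*:*:*:*"]}]
""")

corrected = "cpe:2.3:a:pivotal_software:spring_security:5.1.4:*:*:*:*:*:*:*"

# Look for the corrected CPE on the spring-core package record.
matches = [p for p in content
           if p["package"] == "spring-core" and corrected in p.get("cpes", [])]
print("correction applied" if matches else "correction missing")
```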
Corrections may be updated and deleted via the API as well. Creation of a Correction generates a UUID that may be used to reference that Correction later. Refer to the Enterprise Swagger spec for more details.
CPE Templating
CPE replacement can be templated based on the other fields of the package as well. In the above example, a replacement could have been provided as follows:
{
"field_name": "cpes",
"field_value": "cpe:2.3:a:pivotal_software:spring_security:{implementationVersion}:*:*:*:*:*:*:*"
}
For the “cpes” field only, Anchore Enterprise can recognize a templated field via curly braces “{}”. Package JSON keys contained here will be replaced with their corresponding value.
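The substitution behavior can be illustrated with a short sketch. This is not Anchore’s implementation; it simply demonstrates the curly-brace replacement described above, and for the lookup it uses the metadata key spelling `implementation-version` from the package JSON (the example above writes the template key as `{implementationVersion}`, so the exact key naming Anchore expects may differ):

```python
import re

# Package metadata as reported in the SBOM (abbreviated).
package = {
    "package": "spring-core",
    "implementation-version": "5.1.4.RELEASE",
}

# Template with a curly-brace placeholder referencing a package JSON key.
template = "cpe:2.3:a:pivotal_software:spring_security:{implementation-version}:*:*:*:*:*:*:*"

def render(tmpl, pkg):
    # Replace each {key} with the corresponding package metadata value;
    # unknown keys are left untouched.
    return re.sub(r"\{([^}]+)\}",
                  lambda m: str(pkg.get(m.group(1), m.group(0))), tmpl)

print(render(template, package))
# cpe:2.3:a:pivotal_software:spring_security:5.1.4.RELEASE:*:*:*:*:*:*:*
```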
Vulnerability Matching Configuration
Search by CPE can be globally configured per supported ecosystem via the Anchore Enterprise policy engine configuration. The default enables search by CPE for all ecosystems except javascript (since NPM package vulnerability reports are exhaustively covered by the GitHub Security Advisory Database).
A fully-specified default config is as below:
policy_engine:
vulnerabilities:
matching:
default:
search:
by_cpe:
enabled: true
ecosystem_specific:
dotnet:
search:
by_cpe:
enabled: true
golang:
search:
by_cpe:
enabled: true
java:
search:
by_cpe:
enabled: true
javascript:
search:
by_cpe:
enabled: false
python:
search:
by_cpe:
enabled: true
ruby:
search:
by_cpe:
enabled: true
stock:
search:
by_cpe:
# Disabling search by CPE for the stock matcher will entirely disable binary-only matches
# and is *NOT ADVISED*
enabled: true
A shorter form of the default config is:
policy_engine:
vulnerabilities:
matching:
default:
search:
by_cpe:
enabled: true
ecosystem_specific:
javascript:
search:
by_cpe:
enabled: false
To disable search by CPE for all ecosystems covered by the GitHub Security Advisory Database, the config would look like:
policy_engine:
vulnerabilities:
matching:
default:
search:
by_cpe:
enabled: false
ecosystem_specific:
stock:
search:
by_cpe:
enabled: true
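The precedence between `default` and `ecosystem_specific` can be sketched as follows. This is an illustration of the lookup order implied by the examples above (an ecosystem-specific entry wins, otherwise the default applies), not Anchore’s actual code:

```python
# The "matching" section from the last config example, as a Python dict.
config = {
    "default": {"search": {"by_cpe": {"enabled": False}}},
    "ecosystem_specific": {
        "stock": {"search": {"by_cpe": {"enabled": True}}},
    },
}

def by_cpe_enabled(matching, ecosystem):
    # Prefer the ecosystem-specific entry; fall back to the default.
    specific = matching.get("ecosystem_specific", {}).get(ecosystem)
    section = specific if specific is not None else matching["default"]
    return section["search"]["by_cpe"]["enabled"]

print(by_cpe_enabled(config, "stock"))  # True  (explicit override)
print(by_cpe_enabled(config, "java"))   # False (falls back to default)
```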