Using the AnchoreCTL
AnchoreCTL provides a command line interface on top of the REST API and is published as a golang executable. Using AnchoreCTL users can manage and inspect images, policies, subscriptions, and registries.
If you have not installed AnchoreCTL, please refer to the deployment guide.
To jump to a particular guide, select from the sections below:
1 - Using the Analysis Archive
As mentioned in concepts, there are two locations for image analysis to be stored:
- The working set: the standard state after analysis completes. In this location, the image is fully loaded and available for policy evaluation, content, and vulnerability queries.
- The archive set: a location to keep image analysis data that cannot be used for policy evaluation or queries, but uses cheaper storage and less database space, and can be reloaded into the working set as needed.
Working with the Analysis Archive
List archived images:
anchorectl archive image list
✔ Fetched archive-images
┌─────────────────────────────────────────────────────────────────────────┬────────────────────────┬──────────┬──────────────┬──────────────────────┐
│ IMAGE DIGEST │ TAGS │ STATUS │ ARCHIVE SIZE │ ANALYZED AT │
├─────────────────────────────────────────────────────────────────────────┼────────────────────────┼──────────┼──────────────┼──────────────────────┤
│ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ docker.io/nginx:latest │ archived │ 1.4 MB │ 2022-08-23T21:08:29Z │
└─────────────────────────────────────────────────────────────────────────┴────────────────────────┴──────────┴──────────────┴──────────────────────┘
To add an image to the archive, use the digest. All analysis, policy evaluations, and tags will be added to the archive.
NOTE: this does not remove it from the working set. To fully move it you must first archive and then delete the image from the working set using AnchoreCTL or the API directly.
Archiving Images
Archiving an image analysis creates a snapshot of the image's analysis data, policy evaluation history, and tags, and stores it in a different storage location and a different record location than working set images.
# anchorectl image list
✔ Fetched images
┌───────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────┬────────┐
│ TAG │ DIGEST │ ANALYSIS │ STATUS │
├───────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────┼────────┤
│ docker.io/ubuntu:latest │ sha256:33bca6883412038cc4cbd3ca11406076cf809c1dd1462a144ed2e38a7e79378a │ analyzed │ active │
│ docker.io/ubuntu:latest │ sha256:42ba2dfce475de1113d55602d40af18415897167d47c2045ec7b6d9746ff148f │ analyzed │ active │
│ docker.io/localimage:latest │ sha256:74c6eb3bbeb683eec0b8859bd844620d0b429a58d700ea14122c1892ae1f2885 │ analyzed │ active │
│ docker.io/nginx:latest │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ analyzed │ active │
└───────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────┴────────┘
# anchorectl archive image add sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
✔ Added image to archive
┌─────────────────────────────────────────────────────────────────────────┬──────────┬────────────────────────┐
│ DIGEST │ STATUS │ DETAIL │
├─────────────────────────────────────────────────────────────────────────┼──────────┼────────────────────────┤
│ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ archived │ Completed successfully │
└─────────────────────────────────────────────────────────────────────────┴──────────┴────────────────────────┘
Then, optionally, delete it from the working set:
NOTE: You may need to use --force if the image is the newest of its tags and has active subscriptions.
# anchorectl image delete sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc --force
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST │ STATUS │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘
At this point the image exists in the archive only.
Restoring images from the archive into the working set
This will not delete the archive entry; it only adds the image back to the working set. Restore an image to the working set from the archive:
# anchorectl archive image restore sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
✔ Restore image
┌────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────┬────────┐
│ TAG │ DIGEST │ ANALYSIS │ STATUS │
├────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────┼────────┤
│ docker.io/nginx:latest │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ analyzed │ active │
└────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────┴────────┘
To view the restored image:
# anchorectl image get sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
Tag: docker.io/nginx:latest
Digest: sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
ID: 2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
Analysis: analyzed
Status: active
Working with Archive rules
As with all AnchoreCTL commands, the --help
option will show the arguments, options and descriptions of valid values.
List existing rules:
# anchorectl archive rule list
✔ Fetched rules
┌──────────────────────────────────┬────────────┬──────────────┬────────────────────┬────────────┬─────────┬───────┬──────────────────┬──────────────┬─────────────┬──────────────────┬────────┬──────────────────────┐
│ ID │ TRANSITION │ ANALYSIS AGE │ TAG VERSIONS NEWER │ REGISTRY │ REPO │ TAG │ REGISTRY EXCLUDE │ REPO EXCLUDE │ TAG EXCLUDE │ EXCLUDE EXP DAYS │ GLOBAL │ LAST UPDATED │
├──────────────────────────────────┼────────────┼──────────────┼────────────────────┼────────────┼─────────┼───────┼──────────────────┼──────────────┼─────────────┼──────────────────┼────────┼──────────────────────┤
│ 2ca9284202814f6aa41916fd8d21ddf2 │ archive │ 90d │ 90 │ * │ * │ * │ │ │ │ -1 │ false │ 2022-08-19T17:58:38Z │
│ 6cb4011b102a4ba1a86a5f3695871004 │ archive │ 90d │ 90 │ foobar.com │ myimage │ mytag │ barfoo.com │ * │ * │ -1 │ false │ 2022-08-22T18:47:32Z │
└──────────────────────────────────┴────────────┴──────────────┴────────────────────┴────────────┴─────────┴───────┴──────────────────┴──────────────┴─────────────┴──────────────────┴────────┴──────────────────────┘
Add a rule:
anchorectl archive rule add --transition archive --analysis-age-days 90 --tag-versions-newer 1 --selector-registry 'docker.io' --selector-repository 'library/*' --selector-tag 'latest'
✔ Added rule
ID: 0031546b9ce94cf0ae0e60c0f35b9ea3
Transition: archive
Analysis Age: 90d
Tag Versions Newer: 1
Selector:
Registry: docker.io
Repo: library/*
Tag: latest
Exclude:
Selector:
Registry Exclude:
Repo Exclude:
Tag Exclude:
Exclude Exp Days: -1
Global: false
Last Updated: 2022-08-24T22:57:51Z
The required parameters are: minimum age of analysis in days, number of tag versions newer, and the transition to use.
There is also an optional --system-global
flag available for admin account users that makes the rule apply to all accounts
in the system.
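For example, an admin account user could make the rule shown above apply to all accounts by appending the --system-global flag (a sketch; output omitted):
# anchorectl archive rule add --transition archive --analysis-age-days 90 --tag-versions-newer 1 --selector-registry 'docker.io' --selector-repository 'library/*' --selector-tag 'latest' --system-global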
As a non-admin user you can see global rules but you cannot update/delete them (you will get a 404):
# ANCHORECTL_USERNAME=test1user ANCHORECTL_PASSWORD=password ANCHORECTL_ACCOUNT=test1acct anchorectl archive rule list
✔ Fetched rules
┌──────────────────────────────────┬────────────┬──────────────┬────────────────────┬───────────┬───────────┬────────┬──────────────────┬──────────────┬─────────────┬──────────────────┬────────┬──────────────────────┐
│ ID │ TRANSITION │ ANALYSIS AGE │ TAG VERSIONS NEWER │ REGISTRY │ REPO │ TAG │ REGISTRY EXCLUDE │ REPO EXCLUDE │ TAG EXCLUDE │ EXCLUDE EXP DAYS │ GLOBAL │ LAST UPDATED │
├──────────────────────────────────┼────────────┼──────────────┼────────────────────┼───────────┼───────────┼────────┼──────────────────┼──────────────┼─────────────┼──────────────────┼────────┼──────────────────────┤
│ 16dc38cef54e4ce5ac87d00e90b4a4f2 │ archive │ 90d │ 1 │ docker.io │ library/* │ latest │ │ │ │ -1 │ true │ 2022-08-24T23:01:05Z │
└──────────────────────────────────┴────────────┴──────────────┴────────────────────┴───────────┴───────────┴────────┴──────────────────┴──────────────┴─────────────┴──────────────────┴────────┴──────────────────────┘
# ANCHORECTL_USERNAME=test1user ANCHORECTL_PASSWORD=password ANCHORECTL_ACCOUNT=test1acct anchorectl archive rule delete 16dc38cef54e4ce5ac87d00e90b4a4f2
⠙ Deleting rule
error: 1 error occurred:
* unable to delete rule:
{
"detail": {
"error_codes": []
},
"httpcode": 404,
"message": "Rule not found"
}
# ANCHORECTL_USERNAME=test1user ANCHORECTL_PASSWORD=password ANCHORECTL_ACCOUNT=test1acct anchorectl archive rule get 16dc38cef54e4ce5ac87d00e90b4a4f2
✔ Fetched rule
ID: 16dc38cef54e4ce5ac87d00e90b4a4f2
Transition: archive
Analysis Age: 90d
Tag Versions Newer: 1
Selector:
Registry: docker.io
Repo: library/*
Tag: latest
Exclude:
Selector:
Registry Exclude:
Repo Exclude:
Tag Exclude:
Exclude Exp Days: -1
Global: true
Last Updated: 2022-08-24T23:01:05Z
Delete a rule:
# anchorectl archive rule delete 16dc38cef54e4ce5ac87d00e90b4a4f2
✔ Deleted rule
No results
2 - Analyzing Images
Introduction
In this section you will learn how to analyze images with Anchore Enterprise using AnchoreCTL in two different ways:
- Distributed Analysis: AnchoreCTL analyzes the image content on the host where it runs and imports the analysis into your Anchore deployment
- Centralized Analysis: The Anchore deployment downloads and analyzes the image content directly
Using AnchoreCTL for Centralized Analysis
Overview
This method of image analysis uses the Enterprise deployment itself to download and analyze the image content. You’ll use AnchoreCTL to make API requests to Anchore to tell it which image to analyze but the Enterprise deployment does the work.
You can refer to the Image Analysis Process document in the concepts section to better understand how centralized analysis works in Anchore.
sequenceDiagram
participant A as AnchoreCTL
participant R as Registry
participant E as Anchore Deployment
A->>E: Request Image Analysis
E->>R: Get Image content
R-->>E: Image Content
E->>E: Analyze Image Content (Generate SBOM and secret scans etc) and store results
E->>E: Scan sbom for vulns and evaluate compliance
Usage
The anchorectl image add
command instructs the Anchore Enterprise deployment to pull (download) and analyze an image from a registry. Anchore Enterprise will attempt to retrieve metadata about the image from the Docker registry and if successful will initiate a pull of the image and queue the image for analysis. The command will output details about the image including the image digest, image ID, and full name of the image.
# anchorectl image add docker.io/library/nginx:latest
✔ Added Image
Image:
status: not-analyzed (active)
tag: docker.io/library/nginx:latest
digest: sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
id: 2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
For an image that has not yet been analyzed, the status will appear as not_analyzed. Once the image has been downloaded it will be queued for analysis. When the analysis begins the status will be updated to analyzing, after which the status will update to analyzed.
Anchore Enterprise can be configured to have a size limit for images being added for analysis. Attempting to add an image that exceeds the configured size will fail, return a 400 API error, and log an error message in the catalog service detailing the failure. This feature is disabled by default; see the documentation for additional details on this feature and instructions on how to configure the limit.
Using AnchoreCTL for Distributed Analysis
Overview
This way of adding images uses anchorectl to perform analysis of an image outside the Enterprise deployment, so the Enterprise deployment never
downloads or touches the image content directly. The generation of the SBOM, secret searches, filesystem metadata, and content searches are all
performed by AnchoreCTL on the host where it is run (CI, laptop, runtime node, etc.) and the results are imported into the Enterprise deployment where they can be scanned for vulnerabilities and evaluated against policy.
sequenceDiagram
participant A as AnchoreCTL
participant R as Registry/Docker Daemon
participant E as Anchore Deployment
A->>R: Get Image content
R-->>A: Image Content
A->>A: Analyze Image Content (Generate SBOM and secret scans etc)
A->>E: Import SBOM, secret search, fs metadata
E->>E: Scan sbom for vulns and evaluate compliance
Configuration
Enabling the full set of analyzers, “catalogers” in AnchoreCTL terms, requires updates to the config file used by AnchoreCTL. See Configuring AnchoreCTL for more information on the format and options.
Usage
Note
To locally analyze an image that has been pushed to a registry, it is strongly recommended to use '--from registry' rather than '--from docker'.
This removes the need to have docker installed and also results in a consistent image digest for later use. The registry option gives anchorectl access
to data that the docker source does not, due to limitations with the Docker Daemon itself and how it handles manifests and image digests.
The anchorectl image add --from [registry|docker]
command will run a local SBOM-generation and analysis (secret scans, filesystem metadata, and content searches) and upload the result to Anchore Enterprise without ever having that image touched or loaded by your Enterprise deployment.
# anchorectl image add docker.io/library/nginx:latest --from registry
anchorectl image add docker.io/library/nginx:latest --from registry -n
✔ Added Image
Image:
status: not-analyzed (active)
tag: docker.io/library/nginx:latest
digest: sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
id: 2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
For an image that has not yet been analyzed, the status will appear as not_analyzed. Once the image has been downloaded it will be queued for analysis. When the analysis begins the status will be updated to analyzing, after which the status will update to analyzed.
The '--platform' option in distributed analysis specifies a different platform than the local host's to use when retrieving the image from the registry for analysis by AnchoreCTL.
# anchorectl image add alpine:latest --from registry --platform linux/arm64
Adding images that you own
For images that you are building yourself, the Dockerfile used to build the image should always be passed to Anchore Enterprise at the time of image addition. This is achieved by adding the image as above, but with the additional option to pass the Dockerfile contents to be stored with the system alongside the image analysis data.
This can be achieved in both analysis modes.
For centralized analysis:
# anchorectl image add myrepo.example.com:5000/app/webapp:latest --dockerfile /path/to/Dockerfile
For distributed analysis:
# anchorectl image add myrepo.example.com:5000/app/webapp:latest --from registry --dockerfile /path/to/Dockerfile
To update an image's Dockerfile, simply run the same command again with the path to the updated Dockerfile, along with --force to re-analyze the image. Note that running add without --force (see below) will not re-add an image if it already exists.
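For example, to re-analyze an image with an updated Dockerfile (a sketch reusing the options shown above; the path is illustrative):
# anchorectl image add myrepo.example.com:5000/app/webapp:latest --dockerfile /path/to/updated/Dockerfile --force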
Providing Dockerfile content is supported in both push and pull modes for adding images.
Additional Options
When adding an image, there are some additional (optional) parameters that can be used. We show some examples below; all apply to both distributed and centralized analysis workflows.
# anchorectl image add docker.io/library/alpine:latest --force
✔ Added Image docker.io/library/alpine:latest
Image:
status: not-analyzed (active)
tags: docker.io/alpine:3
docker.io/alpine:latest
docker.io/dnurmi/testrepo:test0
docker.io/library/alpine:latest
digest: sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870
id: 9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5
distro: [email protected] (amd64)
layers: 1
The --force option can be used to reset the image analysis status of any image to not_analyzed, which is the base analysis state for an image. This option shouldn't be necessary in normal circumstances, but can be useful if image re-analysis is needed for any reason.
# anchorectl image add myrepo.example.com:5000/app/webapp:latest --dockerfile /path/to/dockerfile --annotation owner=someperson --annotation [email protected]
The --annotation parameter can be used to specify 'key=value' pairs to associate with the image at the time of image addition. These annotations will then be carried along with the tag, will appear in image records when fetched, and will be included in webhook notification payloads that contain image information when they are sent from the system. To change an annotation, simply run the add command again with the updated annotation and the old annotation will be overridden.
# anchorectl image add alpine:latest --no-auto-subscribe
The '--no-auto-subscribe' flag can be used if you do not wish for the system to automatically subscribe the input tag to the 'tag_update' subscription, which controls whether or not the system will automatically watch the added tag for image content updates and pull in the latest content for centralized analysis. See Subscriptions for more information about using subscriptions and notifications in Anchore.
These options are supported in both distributed and centralized analysis.
In this example, we're adding docker.io/mysql:latest. If we then attempt to add a tag that maps to the same image, for example docker.io/mysql:8, Anchore Enterprise will detect the duplicate image identifiers and return details of all tags matching that image.
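For example (a sketch of the second add; the output below assumes this invocation):
# anchorectl image add docker.io/mysql:8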
Image:
status: analyzed (active)
tags: docker.io/mysql:8
docker.io/mysql:latest
digest: sha256:8191525e9110aa32b436a1ec772b76b9934c1618330cdb566ca9c4b2f01b8e18
id: 4390e645317399cc7bcb50a5deca932a77a509d1854ac194d80ed5182a6b5096
distro: [email protected] (amd64)
layers: 11
Deleting An Image
The following command instructs Anchore Enterprise to delete the image analysis from the working set using a tag. The --force option must be used if there is only one digest associated with the provided tag, or if any active subscriptions are enabled against the referenced tag.
# anchorectl image delete mysql:latest --force
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST │ STATUS │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:8191525e9110aa32b436a1ec772b76b9934c1618330cdb566ca9c4b2f01b8e18 │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘
To delete a specific image record, the digest can be supplied instead to ensure it is the exact image record you want:
# anchorectl image delete sha256:899a03e9816e5283edba63d71ea528cd83576b28a7586cf617ce78af5526f209
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST │ STATUS │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:899a03e9816e5283edba63d71ea528cd83576b28a7586cf617ce78af5526f209 │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘
Deactivate Tag Subscriptions
Check if the tag has any active subscriptions.
# anchorectl subscription list
✔ Fetched subscriptions
┌──────────────────────────────────────────────────────────────────────┬─────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────────────────────────────────────────────────┼─────────────────┼────────┤
│ docker.io/alpine:latest │ policy_eval │ false │
│ docker.io/alpine:3.12.4 │ policy_eval │ false │
│ docker.io/alpine:latest │ vuln_update │ false │
│ docker.io/redis:latest │ policy_eval │ false │
│ docker.io/centos:8 │ policy_eval │ false │
...
...
If the tag has any active subscriptions, they can be disabled (deactivated) in order to permit deletion:
# anchorectl subscription deactivate docker.io/alpine:3.12.6 tag_update
✔ Deactivate subscription
Key: docker.io/alpine:3.12.6
Type: tag_update
Id: a6c7559deb7d5e20621d4a36010c11b0
Active: false
Advanced
Anchore Enterprise also allows adding images directly by digest / tag / timestamp tuple, which can be useful to add images that are still available in a registry but not associated with a current tag any longer.
To add a specific image by digest with the tag it should be associated with:
anchorectl image add docker.io/nginx:stable@sha256:f586d972a825ad6777a26af5dd7fc4f753c9c9f4962599e6c65c1230a09513a8
Note: this will submit the specific image by digest with the associated tag, but Anchore will treat that digest as the most recent digest for the tag, so if the image registry actually has a different history (e.g. a newer image has been pushed to that tag), then the tag history in Anchore may not accurately reflect the history in the registry.
Next Steps
Next, let’s find out how to Inspect Image Content
2.1 - Inspecting Image Content
Introduction
During the analysis of container images, Anchore Enterprise performs deep inspection, collecting data on all artifacts in the image including files, operating system packages and software artifacts such as Ruby GEMs and Node.JS NPM modules.
Inspecting images
The image content
command can be used to return detailed information about the content of the container image.
# anchorectl image content INPUT_IMAGE -t CONTENT_TYPE
The INPUT_IMAGE can be specified in one of the following formats:
- Image Digest
- Image ID
- registry/repo:tag
The CONTENT_TYPE can be one of the following types:
- os: Operating System Packages
- files: All files in the image
- go: GoLang modules
- npm: Node.JS NPM Modules
- gem: Ruby GEMs
- java: Java Archives
- python: Python Artifacts
- nuget: .NET NuGet Packages
- binary: Language runtime locations and version (e.g. openjdk, python, node)
- malware: ClamAV malware scan results, if enabled
You can always get the latest available content types using the ‘-a’ flag:
# anchorectl image content library/nginx:latest -a
✔ Fetched content [fetching available types] library/nginx:latest
binary
files
gem
go
java
malware
npm
nuget
os
python
For example:
# anchorectl image content library/nginx:latest -t files
✔ Fetched content [0 packages] [6099 files] library/nginx:latest
Files:
┌────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────┬───────┬─────┬─────┬───────┬───────────────┬──────────────────────────────────────────────────────────────────┐
│ FILE │ LINK │ MODE │ UID │ GID │ TYPE │ SIZE │ SHA256 DIGEST │
├────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────┼───────┼─────┼─────┼───────┼───────────────┼──────────────────────────────────────────────────────────────────┤
│ /bin │ │ 00755 │ 0 │ 0 │ dir │ 0 │ │
│ /bin/bash │ │ 00755 │ 0 │ 0 │ file │ 1.234376e+06 │ d86b21405852d8642ca41afae9dcf0f532e2d67973b0648b0af7c26933f1becb │
│ /bin/cat │ │ 00755 │ 0 │ 0 │ file │ 43936 │ e9165e34728e37ee65bf80a2f64cd922adeba2c9f5bef88132e1fc3fd891712b │
│ /bin/chgrp │ │ 00755 │ 0 │ 0 │ file │ 72672 │ f47bc94792c95ce7a4d95dcb8d8111d74ad3c6fc95417fae605552e8cf38772c │
│ /bin/chmod │ │ 00755 │ 0 │ 0 │ file │ 64448 │ b6365e442b815fc60e2bc63681121c45341a7ca0f540840193ddabaefef290df │
│ /bin/chown │ │ 00755 │ 0 │ 0 │ file │ 72672 │ 4c1443e2a61a953804a462801021e8b8c6314138371963e2959209dda486c46e │
...
AnchoreCTL will output a subset of fields from the content view; for example, for files only the file name and size are displayed. To retrieve the full output, the -o json parameter should be passed.
For example:
# anchorectl -o json image content library/nginx:latest -t files
✔ Fetched content [0 packages] [6099 files] library/nginx:latest
{
"files": [
{
"filename": "/bin",
"gid": 0,
"linkdest": null,
"mode": "00755",
"sha256": null,
"size": 0,
"type": "dir",
"uid": 0
},
...
Next Steps
Next, let's find out how to view security vulnerabilities identified in the image.
2.2 - Viewing Security Vulnerabilities
Introduction
The image vulnerabilities
command can be used to return a list of vulnerabilities found in the container image.
# anchorectl image vulnerabilities INPUT_IMAGE -t VULN_TYPE
The INPUT_IMAGE
can be specified in one of the following formats:
- Image Digest
- Image ID
- registry/repo:tag
The VULN_TYPE
currently supports:
- os: Vulnerabilities against operating system packages (RPM, DPKG, APK, etc.)
- non-os: Vulnerabilities against language packages (NPM, GEM, Java Archive (jar, war, ear), Python PIP, .NET NuGet, etc.)
- all: Combination report containing both ‘os’ and ’non-os’ vulnerability records.
The system has been designed to incorporate 3rd party feeds for other vulnerabilities.
Examples
To generate a report of OS package (RPM/DEB/APK) vulnerabilities found in the image, including CVE identifier, vulnerable package, severity level, vulnerability details, and version of the fixed package (if available), run:
# anchorectl image vulnerabilities debian:latest -t os
Currently the system draws vulnerability data specifically matched to the following OS distros:
- Alpine
- CentOS
- Debian
- Oracle Linux
- Red Hat Enterprise Linux
- Red Hat Universal Base Image (UBI)
- Ubuntu
- Suse Linux
- Amazon Linux 2
- Google Distroless
To generate a report of language package (NPM/GEM/Java/Python) vulnerabilities, the system draws vulnerability data from the NVD data feed, and vulnerability reports can be viewed using the ’non-os’ vulnerability type:
# anchorectl image vulnerabilities node:latest -t non-os
To generate a list of all vulnerabilities that can be found, regardless of whether they are against an OS or non-OS package type, the ‘all’ vulnerability type can be used:
# anchorectl image vulnerabilities node:latest -t all
Finally, for any of the above queries, these commands (and other anchorectl commands) can be passed the -o json
flag to output the data in JSON format:
# anchorectl -o json image vulnerabilities node:latest -t all
Other options can be reviewed by issuing anchorectl image vulnerabilities --help
at any time.
Next Steps
- Evaluate the image against policies you create.
- Subscribe to receive notifications when the image is updated, when the policy status changes or when new vulnerabilities are detected.
3 - Working with Policies
Introduction
Policies are central to the concept of Anchore Enterprise. This article provides information on how to create, delete, update, and describe policies using AnchoreCTL to interact with a running Anchore Enterprise deployment.
At a high level, Anchore Enterprise consumes policies that contain:
- Policies
- Allowlists
- Mappings
- Allowlisted Images
- Denylisted Images
Anchore Enterprise can store multiple policies for each account, but only one policy can be active at any point in time. All users within an account share the same set of policies. It is common to store historic policies to allow previous policies and evaluations to be inspected. The active policy is the one used for evaluation for notifications, incoming kubernetes webhooks (unless configured otherwise), and other automatic system functions, but a user may request evaluation of any policy stored in the system using its id.
For more information on the content and semantics of policies see: Policies and Evaluation
Creating Policies
Policies are just JSON documents. Anchore Enterprise includes a default policy configured at installation that performs basic CVE checks as well as some Dockerfile checks.
To create custom policies, you may:
- Edit JSON manually and upload a file (see the sketch after this list)
- Use the Anchore Enterprise UI to edit policies
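A common starting point, rather than writing JSON from scratch, is to download the current default policy, edit it, and upload the result as a new policy (a sketch using commands covered below; the file name and the edits are illustrative):
# anchorectl policy get 2c53a13c-1765-11e8-82ef-23527761d060 --detail -o json-raw > my-policy.json
(edit my-policy.json: give it a new name and id and adjust the rules as needed)
# anchorectl policy add --input my-policy.json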
Managing Policies
Policies can be managed directly using the REST API or the anchorectl policy
command.
Adding Policies using AnchoreCTL
The anchorectl
tool allows you to upload policies to Anchore Enterprise.
# anchorectl policy add --input /path/to/policy/policy.json
Note: Adding a policy will not automatically set the policy to be active; you will need to activate the policy using the activate command.
Listing Policies
Anchore Enterprise may store multiple policies; however, at any given time only one policy may be active. Policies can be listed using the policy list
command.
# anchorectl policy list
✔ Fetched policies
┌────────────────┬──────────────────────────────────────┬────────┬──────────────────────┐
│ NAME │ POLICY ID │ ACTIVE │ UPDATED │
├────────────────┼──────────────────────────────────────┼────────┼──────────────────────┤
│ Default policy │ 2c53a13c-1765-11e8-82ef-23527761d060 │ true │ 2023-10-25T20:39:28Z │
│ devteam1policy │ da8208a2-c8ae-4cf2-a25b-a52b0cdcd789 │ false │ 2023-10-25T20:47:16Z │
└────────────────┴──────────────────────────────────────┴────────┴──────────────────────┘
Each policy has a unique ID that will be referenced in policy evaluation reports.
Note: Times are reported in UTC.
Viewing Policies
Using the policy get
command, summary or detailed information about a policy can be retrieved. The policy is referenced using its unique id.
# anchorectl policy get 2c53a13c-1765-11e8-82ef-23527761d060
✔ Fetched policy
Name: Default policy
ID: 2c53a13c-1765-11e8-82ef-23527761d060
Comment: Default policy
Policies:
- artifactType: image
comment: System default policy
id: 48e6f7d6-1765-11e8-b5f9-8b6f228548b6
name: DefaultPolicy
rules:
- action: STOP
gate: dockerfile
id: ce7b8000-829b-4c27-8122-69cd59018400
params:
- name: ports
value: "22"
- name: type
value: denylist
trigger: exposed_ports
...
...
The policy can be downloaded in JSON format by passing the --detail
parameter.
# anchorectl policy get 2c53a13c-1765-11e8-82ef-23527761d060 --detail -o json-raw > policy.json
✔ Fetched policy
Activating Policies
The policy activate
command can be used to activate a policy. The policy is referenced using its unique id which can be retrieved using the policy list
command.
# anchorectl policy activate 2c53a13c-1765-11e8-82ef-23527761d061
✔ Activate policy
┌─────────────────┬──────────────────────────────────────┬────────┬──────────────────────┐
│ NAME │ POLICY ID │ ACTIVE │ UPDATED │
├─────────────────┼──────────────────────────────────────┼────────┼──────────────────────┤
│ Default policy │ 2c53a13c-1765-11e8-82ef-23527761d061 │ true │ 2023-10-25T20:50:17Z │
└─────────────────┴──────────────────────────────────────┴────────┴──────────────────────┘
Note: If Anchore Enterprise has been configured to automatically synchronize policies from the Anchore Cloud then the active policy may be overridden automatically during the next sync.
Deleting Policies
Policies can be deleted from Anchore Enterprise using the policy del
command. The policy is referenced using its unique id. A policy marked as active cannot be deleted; another policy has to be marked active before the currently active policy can be deleted.
# anchorectl policy delete 2c53a13c-1765-11e8-82ef-23527761d061
✔ Deleted policy
No results
See Anchore Policy Checks for information about available policy gates and triggers in Anchore Enterprise.
3.1 - Anchore Policy Checks
Introduction
For a list of all available gates/triggers, refer to Anchore Policy Checks
3.2 - Evaluating Images Against Policies
Introduction
The evaluate
command can be used to evaluate a given image for policy compliance.
The image to be evaluated can be in the following format:
- Image Digest
- Image ID
- registry/repo:tag
Using the Evaluate command
# anchorectl image check docker.io/debian:latest
✔ Evaluated against policy [failed] docker.io/debian:latest
Tag: docker.io/debian:latest
Digest: sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Last Evaluation: 2023-10-25T20:34:43Z
Evaluation: fail
By default only the summary of the evaluation is shown. Passing the --detail
parameter will show the policy checks that raised warnings or errors.
# anchorectl image check docker.io/debian:latest --detail
✔ Evaluated against policy [failed] docker.io/debian:latest
Tag: docker.io/debian:latest
Digest: sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Last Evaluation: 2023-10-25T20:35:05Z
Evaluation: fail
Final Action: stop
Reason: policy_evaluation
Policy Evaluation Details:
┌─────────────────┬─────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐
│ GATE │ TRIGGER │ DESCRIPTION │ STATUS │
├─────────────────┼─────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤
│ dockerfile │ instruction │ Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check │ warn │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (dpkg) - libgnutls30 (CVE-2011-3389 - https://security-tracker.debian.org/tracker/CVE-2011-3389) │ warn │
│ vulnerabilities │ package │ CRITICAL Vulnerability found in os package type (dpkg) - zlib1g (CVE-2022-37434 - https://security-tracker.debian.org/tracker/CVE-2022-37434) │ stop │
└─────────────────┴─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘
In this example we specified registry/repo:tag, which could be ambiguous. At the time of writing the image digest for library/debian:latest
was sha256:0fc.....; however, different images may previously have been tagged as library/debian:latest. The --history
parameter can be passed to show historic evaluations based on previous images or previous policies.
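For example (a sketch; output omitted):
# anchorectl image check docker.io/debian:latest --detail --history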
Anchore supports allowlisting and denylisting images by their name, ID or digest. A denylist or allowlist takes precedence over any policy checks. For example if an image is explicitly listed as denylisted then even if all the individual policy checks pass the image will still fail evaluation.
# anchorectl image check docker.io/debian:latest --detail
✔ Evaluated against policy [failed] docker.io/debian:latest
Tag: docker.io/debian:latest
Digest: sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Last Evaluation: 2023-10-25T20:39:36Z
Evaluation: fail
Final Action: stop
Reason: denylisted
Policy Evaluation Details:
┌─────────────────┬─────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐
│ GATE │ TRIGGER │ DESCRIPTION │ STATUS │
├─────────────────┼─────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤
│ dockerfile │ instruction │ Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check │ warn │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (dpkg) - libgnutls30 (CVE-2011-3389 - https://security-tracker.debian.org/tracker/CVE-2011-3389) │ warn │
└─────────────────┴─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘
In this example even though the image only had one policy check that raised a warning the image fails policy evaluation since it is present on a denylist.
Evaluating status based on Digest or ID
Performing an evaluation on an image specified by name is not recommended since an image name is ambiguous. For example the tag docker.io/library/centos:latest
refers to whatever image has the tag library/centos:latest
at the time of evaluation. At any point in time another image may be tagged as library/centos:latest
.
It is recommended that images are referenced by their Digest. For example at the time of writing the digest of the ‘current’ library/centos:latest image is sha256:191c883e479a7da2362b2d54c0840b2e8981e5ab62e11ab925abf8808d3d5d44
If the image to be evaluated is specified by Image ID or Image Digest then the --tag
parameter must be added. Policies are mapped to images based on registry/repo:tag, so since an Image ID may map to multiple different names, we must specify the name to use in the evaluation.
For example - referencing by Image Digest:
# anchorectl image check docker.io/debian@sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc --detail --tag docker.io/debian:latest
✔ Evaluated against policy [failed] docker.io/debian@sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Tag: docker.io/debian:latest
Digest: sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Last Evaluation: 2023-10-25T20:44:24Z
Evaluation: fail
Final Action: stop
Reason: denylisted
Policy Evaluation Details:
┌─────────────────┬─────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐
│ GATE │ TRIGGER │ DESCRIPTION │ STATUS │
├─────────────────┼─────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤
│ dockerfile │ instruction │ Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check │ warn │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (dpkg) - libgnutls30 (CVE-2011-3389 - https://security-tracker.debian.org/tracker/CVE-2011-3389) │ warn │
│ vulnerabilities │ package │ CRITICAL Vulnerability found in os package type (dpkg) - zlib1g (CVE-2022-37434 - https://security-tracker.debian.org/tracker/CVE-2022-37434) │ stop │
└─────────────────┴─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘
For example - referencing by image ID:
# anchorectl image check dd8bae8d259fed93eb54b3bca0adeb647fc07f6ef16745c8ed4144ada4d51a95 --detail --tag docker.io/debian:latest
✔ Evaluated against policy [failed] dd8bae8d259fed93eb54b3bca0adeb647fc07f6ef16745c8ed4144ada4d51a95
Tag: docker.io/debian:latest
Digest: sha256:0fcb5a38077422c4e70c5c43be21831193ff4559d143e27d8d5721e7a814bdcc
Policy ID: 2c53a13c-1765-11e8-82ef-23527761d060
Last Evaluation: 2023-10-25T20:45:20Z
Evaluation: fail
Final Action: stop
Reason: denylisted
Policy Evaluation Details:
┌─────────────────┬─────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐
│ GATE │ TRIGGER │ DESCRIPTION │ STATUS │
├─────────────────┼─────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤
│ dockerfile │ instruction │ Dockerfile directive 'HEALTHCHECK' not found, matching condition 'not_exists' check │ warn │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in os package type (dpkg) - libgnutls30 (CVE-2011-3389 - https://security-tracker.debian.org/tracker/CVE-2011-3389) │ warn │
│ vulnerabilities │ package │ CRITICAL Vulnerability found in os package type (dpkg) - zlib1g (CVE-2022-37434 - https://security-tracker.debian.org/tracker/CVE-2022-37434) │ stop │
└─────────────────┴─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘
3.3 - Policy Gate: dockerfile
Introduction
This article reviews the “dockerfile” gate and its triggers. The dockerfile gate allows users to perform checks on the content of the dockerfile or docker history for an image and make policy actions based on the construction of an image, not just its content. This is particularly useful for enforcing best practices or metadata inclusion (e.g. labels) on images.
Anchore is either given a dockerfile or infers one from the docker image layer history. There are implications to what data is available and what it means depending on these differing sources, so first, we’ll cover the input data for the gate and how it impacts the triggers and parameters used.
The “dockerfile”
The data that this gate operates on can come from two different sources:
- The actual dockerfile used to build an image, as provided by the user at the time of running
anchorectl image add <img ref> --dockerfile <filename>
or the corresponding API call to: POST /images?dockerfile= - The history from layers as encoded in the image itself (see
docker history <img>
for this output)
All images have data from history available, but data from the actual dockerfile is only available when a user provides it. This also means that any images analyzed by the tag watcher functionality will not have an actual dockerfile.
The FROM line
In the actual dockerfile, the FROM instruction is preserved and available as used to build the image; however, in the history data, the FROM line will always be the very first FROM instruction used to build the image and all of its parent base images. Thus, for most images, the FROM value in the history will be omitted and Anchore will automatically infer a FROM scratch line, which is logically inserted for this gate if the dockerfile/history does not contain an explicit FROM entry.
For example, using the docker.io/jenkins/jenkins image:
IMAGE CREATED CREATED BY SIZE COMMENT
sha256:3b9c9666a66e53473c05a3c69eb2cb888a8268f76935eecc7530653cddc28981 11 hours ago /bin/sh -c #(nop) COPY file:3a15c25533fd87983edc33758f62af7b543ccc3ce9dd570e473eb0702f5f298e in /usr/local/bin/install-plugins.sh 8.79kB
<missing> 11 hours ago /bin/sh -c #(nop) COPY file:f97999fac8a63cf8b635a54ea84a2bc95ae3da4d81ab55267c92b28b502d8812 in /usr/local/bin/plugins.sh 3.96kB
<missing> 11 hours ago /bin/sh -c #(nop) ENTRYPOINT ["/sbin/tini" "--" "/usr/local/bin/jenkins.sh"] 0B
<missing> 11 hours ago /bin/sh -c #(nop) COPY file:dc942ca949bb159f81bbc954773b3491e433d2d3e3ef90bac80ecf48a313c9c9 in /bin/tini 529B
<missing> 11 hours ago /bin/sh -c #(nop) COPY file:a8f986413b77bf4d88562b9d3a0dce98ab6e75403192aa4d4153fb41f450843d in /usr/local/bin/jenkins.sh 1.45kB
<missing> 11 hours ago /bin/sh -c #(nop) COPY file:55594d9d2aed007553a6743a43039b1a48b30527f8fb991ad93e1fd5b1298f60 in /usr/local/bin/jenkins-support 6.12kB
<missing> 11 hours ago /bin/sh -c #(nop) USER jenkins 0B
<missing> 11 hours ago /bin/sh -c #(nop) ENV COPY_REFERENCE_FILE_LOG=/var/jenkins_home/copy_reference_file.log 0B
<missing> 11 hours ago /bin/sh -c #(nop) EXPOSE 50000 0B
<missing> 11 hours ago /bin/sh -c #(nop) EXPOSE 8080 0B
<missing> 11 hours ago |9 JENKINS_SHA=e026221efcec9528498019b6c1581cca70fe9c3f6b10303777d85c6699bca0e4 JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/2.161/jenkins-war-2.161.war TINI_VERSION=v0.16.1 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref 328B
<missing> 11 hours ago /bin/sh -c #(nop) ENV JENKINS_INCREMENTALS_REPO_MIRROR=https://repo.jenkins-ci.org/incrementals 0B
<missing> 11 hours ago /bin/sh -c #(nop) ENV JENKINS_UC_EXPERIMENTAL=https://updates.jenkins.io/experimental 0B
<missing> 11 hours ago /bin/sh -c #(nop) ENV JENKINS_UC=https://updates.jenkins.io 0B
<missing> 11 hours ago |9 JENKINS_SHA=e026221efcec9528498019b6c1581cca70fe9c3f6b10303777d85c6699bca0e4 JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/2.161/jenkins-war-2.161.war TINI_VERSION=v0.16.1 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war && echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha256sum -c - 76MB
<missing> 11 hours ago /bin/sh -c #(nop) ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/2.161/jenkins-war-2.161.war 0B
<missing> 11 hours ago /bin/sh -c #(nop) ARG JENKINS_SHA=5bb075b81a3929ceada4e960049e37df5f15a1e3cfc9dc24d749858e70b48919 0B
<missing> 11 hours ago /bin/sh -c #(nop) ENV JENKINS_VERSION=2.161 0B
<missing> 11 hours ago /bin/sh -c #(nop) ARG JENKINS_VERSION 0B
<missing> 11 hours ago /bin/sh -c #(nop) COPY file:c84b91c835048a52bb864c1f4662607c56befe3c4b1520b0ea94633103a4554f in /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy 328B
<missing> 11 hours ago |7 TINI_VERSION=v0.16.1 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture) -o /sbin/tini && curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture).asc -o /sbin/tini.asc && gpg --no-tty --import ${JENKINS_HOME}/tini_pub.gpg && gpg --verify /sbin/tini.asc && rm -rf /sbin/tini.asc /root/.gnupg && chmod +x /sbin/tini 866kB
<missing> 11 hours ago /bin/sh -c #(nop) COPY file:653491cb486e752a4c2b4b407a46ec75646a54eabb597634b25c7c2b82a31424 in /var/jenkins_home/tini_pub.gpg 7.15kB
<missing> 11 hours ago /bin/sh -c #(nop) ARG TINI_VERSION=v0.16.1 0B
<missing> 11 hours ago |6 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c mkdir -p /usr/share/jenkins/ref/init.groovy.d 0B
<missing> 11 hours ago /bin/sh -c #(nop) VOLUME [/var/jenkins_home] 0B
<missing> 11 hours ago |6 agent_port=50000 gid=1000 group=jenkins http_port=8080 uid=1000 user=jenkins /bin/sh -c mkdir -p $JENKINS_HOME && chown ${uid}:${gid} $JENKINS_HOME && groupadd -g ${gid} ${group} && useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user} 328kB
<missing> 11 hours ago /bin/sh -c #(nop) ENV JENKINS_SLAVE_AGENT_PORT=50000 0B
<missing> 11 hours ago /bin/sh -c #(nop) ENV JENKINS_HOME=/var/jenkins_home 0B
<missing> 11 hours ago /bin/sh -c #(nop) ARG JENKINS_HOME=/var/jenkins_home 0B
<missing> 11 hours ago /bin/sh -c #(nop) ARG agent_port=50000 0B
<missing> 11 hours ago /bin/sh -c #(nop) ARG http_port=8080 0B
<missing> 11 hours ago /bin/sh -c #(nop) ARG gid=1000 0B
<missing> 11 hours ago /bin/sh -c #(nop) ARG uid=1000 0B
<missing> 11 hours ago /bin/sh -c #(nop) ARG group=jenkins 0B
<missing> 11 hours ago /bin/sh -c #(nop) ARG user=jenkins 0B
<missing> 11 hours ago /bin/sh -c apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/* 0B
<missing> 3 weeks ago /bin/sh -c set -ex; if [ ! -d /usr/share/man/man1 ]; then mkdir -p /usr/share/man/man1; fi; apt-get update; apt-get install -y --no-install-recommends openjdk-8-jdk="$JAVA_DEBIAN_VERSION" ; rm -rf /var/lib/apt/lists/*; [ "$(readlink -f "$JAVA_HOME")" = "$(docker-java-home)" ]; update-alternatives --get-selections | awk -v home="$(readlink -f "$JAVA_HOME")" 'index($3, home) == 1 { $2 = "manual"; print | "update-alternatives --set-selections" }'; update-alternatives --query java | grep -q 'Status: manual' 348MB
<missing> 3 weeks ago /bin/sh -c #(nop) ENV JAVA_DEBIAN_VERSION=8u181-b13-2~deb9u1 0B
<missing> 3 weeks ago /bin/sh -c #(nop) ENV JAVA_VERSION=8u181 0B
<missing> 3 weeks ago /bin/sh -c #(nop) ENV JAVA_HOME=/docker-java-home 0B
<missing> 3 weeks ago /bin/sh -c ln -svT "/usr/lib/jvm/java-8-openjdk-$(dpkg --print-architecture)" /docker-java-home 33B
<missing> 3 weeks ago /bin/sh -c { echo '#!/bin/sh'; echo 'set -e'; echo; echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; } > /usr/local/bin/docker-java-home && chmod +x /usr/local/bin/docker-java-home 87B
<missing> 3 weeks ago /bin/sh -c #(nop) ENV LANG=C.UTF-8 0B
<missing> 3 weeks ago /bin/sh -c apt-get update && apt-get install -y --no-install-recommends bzip2 unzip xz-utils && rm -rf /var/lib/apt/lists/* 2.21MB
<missing> 3 weeks ago /bin/sh -c apt-get update && apt-get install -y --no-install-recommends bzr git mercurial openssh-client subversion procps && rm -rf /var/lib/apt/lists/* 142MB
<missing> 3 weeks ago /bin/sh -c set -ex; if ! command -v gpg > /dev/null; then apt-get update; apt-get install -y --no-install-recommends gnupg dirmngr ; rm -rf /var/lib/apt/lists/*; fi 7.81MB
<missing> 3 weeks ago /bin/sh -c apt-get update && apt-get install -y --no-install-recommends ca-certificates curl netbase wget && rm -rf /var/lib/apt/lists/* 23.2MB
<missing> 3 weeks ago /bin/sh -c #(nop) CMD ["bash"] 0B
<missing> 3 weeks ago /bin/sh -c #(nop) ADD file:da71baf0d22cb2ede91c5e3ff959607e47459a9d7bda220a62a3da362b0e59ea in / 101MB
Where the actual dockerfile for that image is:
FROM openjdk:8-jdk-stretch
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
ARG http_port=8080
ARG agent_port=50000
ARG JENKINS_HOME=/var/jenkins_home
ENV JENKINS_HOME $JENKINS_HOME
ENV JENKINS_SLAVE_AGENT_PORT ${agent_port}
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN mkdir -p $JENKINS_HOME \
&& chown ${uid}:${gid} $JENKINS_HOME \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME $JENKINS_HOME
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Use tini as subreaper in Docker container to adopt zombie processes
ARG TINI_VERSION=v0.16.1
COPY tini_pub.gpg ${JENKINS_HOME}/tini_pub.gpg
RUN curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture) -o /sbin/tini \
&& curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture).asc -o /sbin/tini.asc \
&& gpg --no-tty --import ${JENKINS_HOME}/tini_pub.gpg \
&& gpg --verify /sbin/tini.asc \
&& rm -rf /sbin/tini.asc /root/.gnupg \
&& chmod +x /sbin/tini
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.121.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=5bb075b81a3929ceada4e960049e37df5f15a1e3cfc9dc24d749858e70b48919
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
# could use ADD but this one does not check Last-Modified header neither does it allow to control checksum
# see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha256sum -c -
ENV JENKINS_UC https://updates.jenkins.io
ENV JENKINS_UC_EXPERIMENTAL=https://updates.jenkins.io/experimental
ENV JENKINS_INCREMENTALS_REPO_MIRROR=https://repo.jenkins-ci.org/incrementals
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
# for main web interface:
EXPOSE ${http_port}
# will be used by attached slave agents:
EXPOSE ${agent_port}
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
USER ${user}
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
COPY tini-shim.sh /bin/tini
ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/jenkins.sh"]
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
COPY install-plugins.sh /usr/local/bin/install-plugins.sh
Anchore will detect the history/dockerfile as follows if one is not explicitly provided (note the order is reversed from the docker history output, so it reads in the same order as the actual dockerfile):
[
{
"Size" : 45323792,
"Tags" : [],
"Comment" : "",
"Id" : "sha256:cd8eada9c7bb496eb685fc6d2198c33db7cb05daf0fde42e4cf5bf0127cbdf38",
"Created" : "2018-12-28T23:29:37.981962131Z",
"CreatedBy" : "/bin/sh -c #(nop) ADD file:da71baf0d22cb2ede91c5e3ff959607e47459a9d7bda220a62a3da362b0e59ea in / "
},
{
"Size" : 0,
"Tags" : [],
"Comment" : "",
"Id" : "<missing>",
"Created" : "2018-12-28T23:29:38.226681736Z",
"CreatedBy" : "/bin/sh -c #(nop) CMD [\"bash\"]"
},
{
"Size" : 10780911,
"Comment" : "",
"Tags" : [],
"CreatedBy" : "/bin/sh -c apt-get update && apt-get install -y --no-install-recommends \t\tca-certificates \t\tcurl \t\tnetbase \t\twget \t&& rm -rf /var/lib/apt/lists/*",
"Created" : "2018-12-29T00:04:28.920875483Z",
"Id" : "sha256:c2677faec825930a8844845f55454ee0495ceb5bea9fc904d5b3125de863dc1d"
},
{
"Comment" : "",
"Tags" : [],
"Size" : 4340024,
"CreatedBy" : "/bin/sh -c set -ex; \tif ! command -v gpg > /dev/null; then \t\tapt-get update; \t\tapt-get install -y --no-install-recommends \t\t\tgnupg \t\t\tdirmngr \t\t; \t\trm -rf /var/lib/apt/lists/*; \tfi",
"Created" : "2018-12-29T00:04:34.642152001Z",
"Id" : "sha256:fcce419a96b1219a265bf7a933d66b585a6f8d73448533f3833c73ad49fb5e88"
},
{
"Size" : 50062697,
"Tags" : [],
"Comment" : "",
"Id" : "sha256:045b51e26e750443c84216071a1367a7aae0b76245800629dc04934628b4b1ea",
"CreatedBy" : "/bin/sh -c apt-get update && apt-get install -y --no-install-recommends \t\tbzr \t\tgit \t\tmercurial \t\topenssh-client \t\tsubversion \t\t\t\tprocps \t&& rm -rf /var/lib/apt/lists/*",
"Created" : "2018-12-29T00:04:59.676112605Z"
},
... <truncated for brevity> ...
{
"Tags" : [],
"Comment" : "",
"Size" : 0,
"Id" : "<missing>",
"CreatedBy" : "/bin/sh -c #(nop) ENTRYPOINT [\"/sbin/tini\" \"--\" \"/usr/local/bin/jenkins.sh\"]",
"Created" : "2019-01-21T08:56:30.737221895Z"
},
{
"Size" : 1549,
"Tags" : [],
"Comment" : "",
"Id" : "sha256:283cd3aba8691a3b9d22d923de66243b105758e74de7d9469fe55a6a58aeee30",
"Created" : "2019-01-21T08:56:32.015667468Z",
"CreatedBy" : "/bin/sh -c #(nop) COPY file:f97999fac8a63cf8b635a54ea84a2bc95ae3da4d81ab55267c92b28b502d8812 in /usr/local/bin/plugins.sh "
},
{
"Comment" : "",
"Tags" : [],
"Size" : 3079,
"Created" : "2019-01-21T08:56:33.158854485Z",
"CreatedBy" : "/bin/sh -c #(nop) COPY file:3a15c25533fd87983edc33758f62af7b543ccc3ce9dd570e473eb0702f5f298e in /usr/local/bin/install-plugins.sh ",
"Id" : "sha256:b0ce8ab5a5a7da5d762f25af970f4423b98437a8318cb9852c3f21354cbf914f"
}
]
NOTE: Anchore processes the leading /bin/sh commands, so you do not have to include those in any trigger param config if using the docker history output.
The actual_dockerfile_only Parameter
The actual vs. history source impacts the semantics of the dockerfile gate's triggers. To allow explicit control of the differences, most triggers in this gate include a parameter, actual_dockerfile_only, that can be set to true or false to ensure the trigger check is only done on the specified source of data. If actual_dockerfile_only = true, then the trigger will evaluate only if an actual dockerfile is available for the image and will skip evaluation if not. If actual_dockerfile_only is false or omitted, then the trigger will run on the actual dockerfile if available, or the history data if the dockerfile was not provided.
Differences in data between Docker History and actual Dockerfile
With Actual Dockerfile:
- FROM line is preserved, so the parent tag of the image is easily available
- Instruction checks are all against instructions created during the build for that exact image, not any parent images
- When the actual_dockerfile_only parameter is set to true, all instructions from the parent image are ignored in policy processing. This may have some unexpected consequences depending on how your images are structured and layered (e.g. golden base images that establish common patterns of volumes, labels, healthchecks)
- COPY/ADD instructions will maintain the actual values used
- Multi-stage builds in that specific dockerfile will be visible as multiple FROM lines in the output
With Docker History data, when no dockerfile is provided:
- FROM line is not accurate, and will nearly always default to ‘FROM scratch’
- Instructions are processed from all layers in the image
- COPY and ADD instructions are transformed into SHAs rather than the actual file path/name used at build-time
- Multi-stage builds are not tracked with multiple FROM lines, only the copy operations between the phases
Trigger: instruction
This trigger evaluates instructions found in the dockerfile or, if no dockerfile was provided, in the image history.
Parameters
actual_dockerfile_only (optional): See above
instruction: The dockerfile instruction to check against. One of:
- ADD
- ARG
- COPY
- CMD
- ENTRYPOINT
- ENV
- EXPOSE
- FROM
- HEALTHCHECK
- LABEL
- MAINTAINER
- ONBUILD
- USER
- RUN
- SHELL
- STOPSIGNAL
- VOLUME
- WORKDIR
check: The comparison/evaluation to perform. One of: =, !=, exists, not_exists, like, not_like, in, not_in.
value (optional): A string value to compare against, if applicable
Examples
- Ensure an image has a HEALTHCHECK defined in the image (warn if not found):
{
"gate": "dockerfile",
"trigger": "instruction",
"action": "warn",
"parameters": [
{
"name": "instruction",
"value": "HEALTHCHECK"
},
{
"name": "check",
"value": "not_exists"
}
]
}
- Check for AWS environment variables set:
{
"gate": "dockerfile",
"trigger": "instruction",
"action": "stop",
"parameters": [
{
"name": "instruction",
"value": "ENV"
},
{
"name": "check",
"value": "like"
},
{
"name": "value",
"value": "AWS_.*KEY"
}
]
}
Trigger: packages_added
This trigger warns if a package was added to the SBOM.
Parameters
Optional parameter: “package_type”
Example
Raise a warning if packages were added.
{
"action": "WARN",
"gate": "tag_drift",
"trigger": "packages_added",
"params": [],
"id": "1ba3461f-b9db-4a6c-ac88-329d38e08df5"
}
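If you only care about a single package ecosystem, the optional package_type parameter can presumably be supplied in the same name/value form used by the other examples in this document. The "rpm" value below is purely illustrative; check the policy gate reference for the supported values:
{
  "action": "WARN",
  "gate": "tag_drift",
  "trigger": "packages_added",
  "params": [
    {
      "name": "package_type",
      "value": "rpm"
    }
  ],
  "id": "<generated-id>"
}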
Trigger: packages_removed
This trigger warns if a package was deleted from the SBOM.
Parameters
Optional parameter: “package_type”
Example
Raise a warning if packages were deleted.
{
"action": "WARN",
"gate": "tag_drift",
"trigger": "packages_removed",
"params": [],
"id": "de05d77b-1f93-4df4-a65d-57d9042b1f3a"
}
Trigger: packages_modified
This trigger warns if a package was changed in the SBOM.
Parameters
Optional parameter: “package_type”
Example
Raise a warning if packages were changed.
{
"action": "WARN",
"gate": "tag_drift",
"trigger": "packages_modified",
"params": [],
"id": "1168b0ac-df6c-4715-8077-2cb3e016cf63"
}
Trigger: effective_user
This trigger processes all USER directives in the dockerfile or history to determine which user will be used to run the container by default (assuming no user is set explicitly at runtime). The detected value is then subject to an allowlist or denylist filter depending on the configured parameters. Typically, this is used for denylisting the root user.
Parameters
actual_dockerfile_only (optional): See above
users: A string with a comma-delimited list of usernames to check for
type: The type of check to perform. One of: ‘denylist’ or ‘allowlist’. This determines how the value of the ‘users’ parameter is interpreted.
Examples
- Denylist root user
{
"gate": "dockerfile",
"trigger": "effective_user",
"action": "stop",
"parameters": [
{
"name": "users",
"value": "root"
},
{
"name": "type",
"value": "denylist"
}
]
}
- Denylist root user but only if set in actual dockerfile, not inherited from parent image
{
"gate": "dockerfile",
"trigger": "effective_user",
"action": "stop",
"parameters": [
{
"name": "users",
"value": "root"
},
{
"name": "type",
"value": "denylist"
},
{
"name": "actual_dockerfile_only",
"value": "true"
}
]
}
- Warn if the user is not either “nginx” or “jenkins”
{
"gate": "dockerfile",
"trigger": "effective_user",
"action": "warn",
"parameters": [
{
"name": "users",
"value": "nginx,jenkins"
},
{
"name": "type",
"value": "allowlist"
}
]
}
Trigger: exposed_ports
This trigger processes the set of EXPOSE directives in the dockerfile/history to determine the set of ports that are defined to be exposed (since it can span multiple directives). It performs checks on that set to denylist or allowlist them based on the parameter settings.
Parameters
actual_dockerfile_only (optional): See above
ports: A string of comma-delimited port numbers to be checked
type: The type of check to perform. One of: 'denylist' or 'allowlist'. This determines how the value of the 'ports' parameter is interpreted
Examples
- Allow only ports 80 and 443. Trigger will fire on any port defined to be exposed that is not 80 or 443
{
"gate": "dockerfile",
"trigger": "exposed_ports",
"action": "warn",
"parameters": [
{
"name": "ports",
"value": "80,443"
},
{
"name": "type",
"value": "allowlist"
}
]
}
- Denylist ports 21 (ftp), 22 (ssh), and 53 (dns). The trigger will fire on ports 21, 22, and 53 if found in EXPOSE directives
{
"gate": "dockerfile",
"trigger": "exposed_ports",
"action": "warn",
"parameters": [
{
"name": "ports",
"value": "21,22,53"
},
{
"name": "type",
"value": "denylist"
}
]
}
Trigger: no_dockerfile_provided
This trigger allows checks on the way the image was added, firing if the dockerfile was not explicitly provided at analysis time. This is useful in identifying and qualifying other trigger matches.
Parameters
None
Examples
- Raise a warning if no dockerfile was provided at analysis time
{
"gate": "dockerfile",
"trigger": "no_dockerfile_provided",
"action": "warn",
"parameters": []
}
4 - Working with Registries
Using the API or AnchoreCTL, Anchore Enterprise can be instructed to download an image from a public or private container registry.
Anchore Enterprise will attempt to download images from any registry without requiring further configuration. However, if
your registry requires authentication, then the registry and corresponding credentials will need to be defined.
Anchore Enterprise can analyze images from any Docker V2 compatible registry.
Jump to the registry configuration guide for your registry:
4.1 - Configuring Registries
Anchore Enterprise will attempt to download images from any registry without requiring further configuration.
However, if your registry requires authentication, then the registry and corresponding credentials will need to be defined.
Listing Registries
Running the following command lists the defined registries.
# anchorectl registry list
✔ Fetched registries
┌───────────────────┬───────────────┬───────────────┬─────────────────┬──────────────────────┬─────────────┬───────────────────┐
│ REGISTRY NAME │ REGISTRY TYPE │ REGISTRY USER │ REGISTRY VERIFY │ CREATED AT │ LAST UPATED │ REGISTRY │
├───────────────────┼───────────────┼───────────────┼─────────────────┼──────────────────────┼─────────────┼───────────────────┤
│ docker.io │ docker_v2 │ anchore │ true │ 2022-08-24T21:37:08Z │ │ docker.io │
│ quay.io │ docker_v2 │ anchore │ true │ 2022-08-25T20:55:33Z │ │ quay.io │
│ 192.168.1.89:5000 │ docker_v2 │ johndoe │ true │ 2022-08-25T20:56:01Z │ │ 192.168.1.89:5000 │
└───────────────────┴───────────────┴───────────────┴─────────────────┴──────────────────────┴─────────────┴───────────────────┘
Here we can see that 3 registries have been defined. If no registry was defined, Anchore Enterprise would attempt to
pull images without authentication, but once a registry is defined, all pulls for images from that registry will use the specified username and password.
Adding a Registry
Registries can be added using the following syntax.
# ANCHORECTL_REGISTRY_PASSWORD=<password> anchorectl registry add <registry> --username <username>
The REGISTRY parameter should include the fully qualified hostname and port number of the registry. For example: registry.anchore.com:5000
Anchore Enterprise will only pull images from a TLS/SSL-enabled registry. If the registry is protected with a self-signed certificate, or a certificate
signed by an unknown certificate authority, then the --secure-connection=<true|false>
parameter can be passed, which instructs Anchore Enterprise not to validate the certificate.
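For example, to add a registry that presents a self-signed certificate (the registry address and username below are illustrative):
# ANCHORECTL_REGISTRY_PASSWORD=<password> anchorectl registry add 192.168.1.89:5000 --username johndoe --secure-connection=false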
Most Docker V2 compatible registries require username and password for authentication. Amazon ECR, Google GCR and Microsoft Azure include support
for their own native credentialing. See Working with AWS ECR Registry Credentials,
Working with Google GCR Registry Credentials and
Working with Azure Registry Credentials for more details.
Getting Registry Details
The registry get command allows the user to retrieve details about a specific registry.
For example:
# anchorectl registry get registry.example.com
✔ Fetched registry
┌──────────────────────┬───────────────┬───────────────┬─────────────────┬──────────────────────┬─────────────┬──────────────────────┐
│ REGISTRY NAME │ REGISTRY TYPE │ REGISTRY USER │ REGISTRY VERIFY │ CREATED AT │ LAST UPATED │ REGISTRY │
├──────────────────────┼───────────────┼───────────────┼─────────────────┼──────────────────────┼─────────────┼──────────────────────┤
│ registry.example.com │ docker_v2 │ johndoe │ false │ 2022-08-25T20:58:33Z │ │ registry.example.com │
└──────────────────────┴───────────────┴───────────────┴─────────────────┴──────────────────────┴─────────────┴──────────────────────┘
In this example we can see that the registry.example.com registry was added to Anchore Enterprise on the 25th August at 20:58 UTC.
The password for the registry cannot be retrieved through the API or AnchoreCTL.
Updating Registry Details
Once a registry has been defined, its parameters can be updated using the update command. This allows a registry's username, password, and secure-connection (validate TLS) parameters to be updated using the same syntax as the 'add' operation.
# ANCHORECTL_REGISTRY_PASSWORD=<newpassword> anchorectl registry update registry.example.com --username <newusername> --validate=<true|false> --secure-connection=<true|false>
Deleting Registries
A registry can be deleted from Anchore's configuration using the delete
command.
For example to delete the configuration for registry.example.com the following command should be issued:
# anchorectl registry delete registry.example.com
✔ Deleted registry
No results
Note: Deleting a registry record does not delete the records of images/tags associated with that registry.
Advanced
Anchore Enterprise attempts to perform a credential validation upon registry addition, but there are cases where a credential can be valid but the validation routine can fail (in particular, credential
validation methods are changing for public registries over time). If you are unable to add a registry but believe that the credential you are providing is valid, or you wish to add a
credential to Anchore before it is in place in the registry, you can bypass the registry credential validation process using the --validate=false
option to the registry add
or registry update
command.
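For example (illustrative registry and username):
# ANCHORECTL_REGISTRY_PASSWORD=<password> anchorectl registry add registry.example.com --username johndoe --validate=false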
4.2 - Working with Amazon ECR Registry Credentials
Amazon AWS typically uses keys instead of traditional usernames and passwords. These keys consist of an access key ID and a secret access key. While it is possible to use the aws ecr get-login command to create an access token, the token expires after 12 hours, so it is not appropriate for use with Anchore Enterprise; a user would need to update their registry credentials regularly. Instead, when adding an Amazon ECR registry to Anchore Enterprise you should pass the aws_access_key_id and aws_secret_access_key.
# ANCHORECTL_REGISTRY_PASSWORD=<MY_AWS_SECRET_ACCESS_KEY> anchorectl registry add 1234567890.dkr.ecr.us-east-1.amazonaws.com --username <MY_AWS_ACCESS_KEY_ID> --type awsecr
The registry-type parameter instructs Anchore Enterprise to handle these credentials as AWS credentials rather than traditional usernames and passwords. Currently Anchore Enterprise supports two types of registry authentication: standard username and password for most Docker V2 registries, and Amazon ECR. In this example we specified the registry type on the command line; however, if this parameter is omitted then AnchoreCTL will attempt to guess the registry type from the URL, which uses a standard format.
Anchore Enterprise will use the AWS access key and secret access key to generate authentication tokens to access the Amazon ECR registry. Anchore Enterprise will manage regeneration of these tokens, which typically expire after 12 hours.
In addition to supporting AWS access key credentials Anchore also supports the use of IAM roles for authenticating with Amazon ECR if Anchore Enterprise is run on an EC2 instance.
In this case you can configure Anchore Enterprise to inherit the IAM role from the EC2 instance hosting the system.
When launching the EC2 instance that will run Anchore Enterprise you need to specify a role that includes the AmazonEC2ContainerRegistryReadOnly policy.
While this is best performed using a CloudFormation template, you can configure it manually from the launch instance wizard.
Step 1: Select Create new IAM role.
Step 2: Under type of trusted entity select EC2.
Ensure that the AmazonEC2ContainerRegistryReadOnly policy is selected.
Step 3: Attach Permissions to the Role.
Step 4: Name the role.
Give a name to the role and add this role to the Instance you are launching.
On the running EC2 instance you can manually verify that the instance has inherited the correct role by running the following command:
# curl http://169.254.169.254/latest/meta-data/iam/info
{
"Code" : "Success",
"LastUpdated" : "2018-01-1218:45:12Z",
"InstanceProfileArn" : "arn:aws:iam::123456789012:instance-profile/ECR-ReadOnly",
"InstanceProfileId" : "ABCDEFGHIJKLMNOP”
}
Step 5: Enable IAM Authentication in Anchore Enterprise.
By default the support for inheriting the IAM role is disabled.
To enable IAM-based authentication, add the following entry to the top of the Anchore Enterprise config.yaml file:
allow_awsecr_iam_auto: True
Step 6: Add the Registry using the AWSAUTO user.
When IAM support is enabled instead of passing the access key and secret access key use “awsauto” for both username and password. This will instruct Anchore Enterprise to inherit the role from the underlying EC2 instance.
# ANCHORECTL_REGISTRY_PASSWORD=awsauto anchorectl registry add 1234567890.dkr.ecr.us-east-1.amazonaws.com --username awsauto --type awsecr
4.3 - Working with Azure Registry Credentials
To use an Azure Registry, you can configure Anchore to use either the admin credential(s) or a service principal. Refer to Azure documentation for the differences and how to set up each. When you've chosen a credential type, use the following to determine which registry command options correspond to each value for your credential type.
Admin Account
- Registry: The login server (Ex. myregistry1.azurecr.io)
- Username: The username in the 'az acr credential show --name <registry>' output
- Password: The password or password2 value from the 'az acr credential show' command result
Service Principal
- Registry: The login server (Ex. myregistry1.azurecr.io)
- Username: The service principal app id
- Password: The service principal password
Note: You can follow Microsoft Documentation for creating a Service Principal.
To add an azure registry credential, invoke anchorectl
as follows:
ANCHORECTL_REGISTRY_PASSWORD=<password> anchorectl registry add <registry> --username <username>
Once a registry has been added, any image that is added (e.g. anchorectl image add <Registry>/some/repo:sometag
) will use the provided credential to download/inspect and analyze the image.
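For example, adding a registry with the admin account credential might look like the following. The values are illustrative; for an admin account the username is typically the registry name:
ANCHORECTL_REGISTRY_PASSWORD=<password from az acr credential show> anchorectl registry add myregistry1.azurecr.io --username myregistry1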
4.4 - Working with Google Container Registry (GCR) Credentials
When working with Google Container Registry it is recommended that you use JSON keys rather than the short-lived access tokens.
JSON key files are long-lived and are tightly scoped to individual projects and resources. You can read more about JSON credentials in Google’s documentation at the following URL: Google Container Registry advanced authentication
Once a JSON key file has been created with permissions to read from the container registry, the registry should be added with the username _json_key, and the password should be the contents of the key file.
In the following example a file named key.json in the current directory contains the JSON key with readonly access to the my-repo repository within the my-project Google Cloud project.
# ANCHORECTL_REGISTRY_PASSWORD="$(cat key.json)" anchorectl registry add us.gcr.io --username _json_key
5 - Working with Subscriptions
Introduction
Anchore Enterprise supports 7 types of subscriptions.
- Tag Update
- Policy Update
- Vulnerability Update
- Analysis Update
- Alerts
- Repository Update
- Runtime Inventory
For detailed information about subscriptions, please see Subscriptions
Managing Subscriptions
Subscriptions can be managed using AnchoreCTL.
Listing Subscriptions
Running the subscription list
command will output a table showing the type and status of each subscription.
# anchorectl subscription list | more
✔ Fetched subscriptions
┌──────────────────────────────────────────────────────────────────────┬─────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────────────────────────────────────────────────┼─────────────────┼────────┤
│ docker.io/alpine:latest │ policy_eval │ false │
│ docker.io/alpine:3.12.4 │ policy_eval │ false │
│ docker.io/alpine:latest │ vuln_update │ false │
│ docker.io/redis:latest │ policy_eval │ false │
│ docker.io/centos:8 │ policy_eval │ false │
│ docker.io/alpine:3.8.4 │ policy_eval │ false │
│ docker.io/centos:8 │ vuln_update │ false │
...
└──────────────────────────────────────────────────────────────────────┴─────────────────┴────────┘
Note: Tag Subscriptions are tied to registry/repo:tag and not to image IDs.
Activating Subscriptions
The subscription activate
command is used to enable a subscription type for a given image. The command takes the following form:
anchorectl subscription activate SUBSCRIPTION_KEY SUBSCRIPTION_TYPE
SUBSCRIPTION_TYPE should be one of:
- tag_update
- vuln_update
- policy_eval
- analysis_update
SUBSCRIPTION_KEY should be the name of the subscribed tag, e.g. docker.io/ubuntu:latest
For example:
# anchorectl subscription activate docker.io/ubuntu:latest tag_update
✔ Activate subscription
Key: docker.io/ubuntu:latest
Type: tag_update
Id: 04f0e6d230d3e297acdc91ed9944278d
Active: true
and to de-activate:
# anchorectl subscription deactivate docker.io/ubuntu:latest tag_update
✔ Deactivate subscription
Key: docker.io/ubuntu:latest
Type: tag_update
Id: 04f0e6d230d3e297acdc91ed9944278d
Active: false
Tag Update Subscription
Any new tag added to Anchore Enterprise by AnchoreCTL will, by default, enable the Tag Update Subscription.
If you do not need this functionality, you can use the flag --no-auto-subscribe
or set the environment variable ANCHORECTL_IMAGE_NO_AUTO_SUBSCRIBE
when adding new tags.
# ./anchorectl image add docker.io/ubuntu:latest --no-auto-subscribe
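The environment variable form should behave the same way; the value shown here is an assumption:
# ANCHORECTL_IMAGE_NO_AUTO_SUBSCRIBE=true ./anchorectl image add docker.io/ubuntu:latest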
Runtime Inventory Subscription
AnchoreCTL provides commands to help navigate the runtime_inventory
Subscription. The subscription will monitor a specific runtime inventory context and add its images to the system for analysis.
Listing Inventory Watchers
# ./anchorectl inventory watch list
✔ Fetched watches
┌──────────────────────────┬───────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────┼───────────────────┼────────┤
│ cluster-one/my-namespace │ runtime_inventory │ false │
└──────────────────────────┴───────────────────┴────────┘
Activating an Inventory Watcher
Note: This command will create the subscription if one does not already exist.
# ./anchorectl inventory watch activate cluster-one/my-namespace
✔ Activate watch
┌──────────────────────────┬───────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────┼───────────────────┼────────┤
│ cluster-one/my-namespace │ runtime_inventory │ true │
└──────────────────────────┴───────────────────┴────────┘
Deactivating an Inventory Watcher
# ./anchorectl inventory watch deactivate cluster-one/my-namespace
✔ Deactivate watch
┌──────────────────────────┬───────────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────────────┼───────────────────┼────────┤
│ cluster-one/my-namespace │ runtime_inventory │ false │
└──────────────────────────┴───────────────────┴────────┘
Webhook Configuration
Webhooks are configured in the Anchore Enterprise configuration file config.yaml.
In the sample configuration file webhooks are disabled (commented out).
webhooks:
webhook_user: 'user'
webhook_pass: 'pass'
ssl_verify: False
The webhooks can, optionally, pass basic credentials to the webhook endpoint; if these are not required then the webhook_user and webhook_pass entries can be commented out. By default TLS/SSL connections will validate the certificate provided. This can be suppressed by uncommenting the ssl_verify option (which is set to False in the sample configuration).
url: 'http://localhost:9090/general/<notification_type>/<userId>'
If configured, the general webhook will receive all notifications (policy_eval, tag_update, vuln_update) for each user. In this case <notification_type> will be replaced by the appropriate type and <userId> will be replaced by the configured user, which is, by default, admin. e.g. http://localhost:9090/general/vuln_update/admin
policy_eval:
url: 'http://localhost:9090/somepath/<userId>'
webhook_user: 'mehuser'
webhook_pass: 'mehpass'
Specific endpoints for each event type can be configured, for example an endpoint for policy_eval notifications. In these cases the url, username, password and SSL/TLS verification can be specified.
error_event:
url: 'http://localhost:9090/error_event/'
This webhook, if configured, will fire if any FATAL system events are logged.
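Putting the fragments above together, a complete webhooks stanza in config.yaml might look like the sketch below. The nesting and the general key name are assumptions based on the sample configuration; verify them against the config.yaml shipped with your deployment:
webhooks:
  webhook_user: 'user'
  webhook_pass: 'pass'
  ssl_verify: False
  general:
    url: 'http://localhost:9090/general/<notification_type>/<userId>'
  policy_eval:
    url: 'http://localhost:9090/somepath/<userId>'
    webhook_user: 'mehuser'
    webhook_pass: 'mehpass'
  error_event:
    url: 'http://localhost:9090/error_event/'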
6 - Scanning Repositories
Introduction
Individual images can be added to Anchore Enterprise using the image add
command. This may be performed by a CI/CD plugin such as Jenkins or manually by a user with AnchoreCTL or API.
Anchore Enterprise can also be configured to scan repositories and automatically add any tags found in the repository. Once added, Anchore Enterprise will poll the registry to look for changes at a user configurable interval.
This interval is specified in the Anchore Enterprise configuration file: config.yaml within the services -> Catalog configuration stanza.
Example Configuration
cycle_timers:
image_watcher: 3600
repo_watcher: 60
In this example the repo is polled for updates every minute (60 seconds).
For more details on the Repository Subscription, please see Subscriptions
Adding Repositories
The repo add
command instructs Anchore Enterprise to add the specified repository to the watch list.
# anchorectl repo add docker.io/alpine
✔ Added repo
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ true │
└──────────────────┴─────────────┴────────┘
By default Anchore Enterprise will automatically add the discovered tags to the list of subscribed tags (see Working with Subscriptions). This behavior can be overridden by passing the --auto-subscribe=<true|false>
option.
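For example, to watch a repository without automatically subscribing its tags to tag updates:
# anchorectl repo add docker.io/alpine --auto-subscribe=false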
Listing Repositories
The repo list
command will show the repositories monitored by Anchore Enterprise.
# anchorectl repo list
✔ Fetched repos
┌─────────────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├─────────────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ true │
│ docker.io/elasticsearch │ repo_update │ true │
└─────────────────────────┴─────────────┴────────┘
Deleting Repositories
The del
option can be used to instruct Anchore Enterprise to remove the repository from the watch list. Once the repository record has been deleted no further changes to the repository will be detected by Anchore Enterprise.
Note: No existing image data will be removed from Anchore Enterprise.
# anchorectl repo del docker.io/alpine
✔ Deleted repo
No results
Unwatching Repositories
When a repository is added, Anchore Enterprise will monitor the repository for new and updated tags. This behavior can be disabled, preventing Anchore Enterprise from monitoring the repository for changes.
In this case the repo list
command will show false in the ACTIVE column for this repository.
# anchorectl repo unwatch docker.io/alpine
✔ Unwatch repo
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ false │
└──────────────────┴─────────────┴────────┘
Watching Repositories
The repo watch command instructs Anchore Enterprise to monitor a repository for new and updated tags. By default repositories added to Anchore Enterprise are automatically watched. This option is only required if a repository has been manually unwatched.
# anchorectl repo watch docker.io/alpine
✔ Watch repo
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ true │
└──────────────────┴─────────────┴────────┘
As of v3.0, Anchore Enterprise can be configured with a size limit for images being added for analysis. This feature applies to the repo watcher. Images in a watched repo that exceed the configured maximum size will not be added, and a message will be logged in the catalog service. This feature is disabled by default; see the documentation for additional details on this feature and instructions on how to configure the limit.
Removing a Repository and All Images
There may be a time when you wish to stop a repository analysis while the analysis is running (e.g., after accidentally watching a repository with a large number of tags). There are several steps in the process, which are outlined below. We will use docker.io/library/alpine
as an example.
Note: Be careful when deleting images. In this flow, Anchore deletes the image, not just the repository/tag combo. Because of this, deletes may impact more than the expected repository since an image may have tags in multiple repositories or even registries.
Check the State
Take a look at the repository list.
anchorectl repo list
✔ Fetched repos
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ true │
└──────────────────┴─────────────┴────────┘
Also look at the image list.
anchorectl image list | grep docker.io/alpine
✔ Fetched images
│ docker.io/alpine:20220328 │ sha256:c11c38f8002da63722adb5111241f5e3c2bfe4e54c0e8f0fb7b5be15c2ddca5f │ not_analyzed │ active │
│ docker.io/alpine:3.16.0 │ sha256:4ff3ca91275773af45cb4b0834e12b7eb47d1c18f770a0b151381cd227f4c253 │ not_analyzed │ active │
│ docker.io/alpine:20220316 │ sha256:57031e1a3b381fba5a09d5c338f7dbeeed2260ad5100c66b2192ab521ae27fc1 │ not_analyzed │ active │
│ docker.io/alpine:3.14.5 │ sha256:aee6c86e12b609732a30526ddfa8194e4a54dc5514c463e4c2e41f5a89a0b67a │ not_analyzed │ active │
│ docker.io/alpine:3.15.5 │ sha256:26284c09912acfc5497b462c5da8a2cd14e01b4f3ffa876596f5289dd8eab7f2 │ not_analyzed │ active │
...
...
Removing the Repository from the Watched List
Unwatch docker.io/library/alpine
to prevent future automatic updates.
# anchorectl repo unwatch docker.io/alpine
✔ Unwatch repo
┌──────────────────┬─────────────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├──────────────────┼─────────────┼────────┤
│ docker.io/alpine │ repo_update │ false │
└──────────────────┴─────────────┴────────┘
Delete the Repository
Delete the repository. This may need to be done a couple of times if the repository still shows in the repository list.
# anchorectl repo delete docker.io/alpine
✔ Deleted repo
No results
Forcefully Delete the Images
Delete the analysis/images. This may need to be done several times to remove all images depending on how many there are.
# for i in `anchorectl -q image list | grep docker.io/alpine | awk '{print $2}'`
> do
> anchorectl image delete ${i} --force
> done
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST │ STATUS │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:c11c38f8002da63722adb5111241f5e3c2bfe4e54c0e8f0fb7b5be15c2ddca5f │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘
┌─────────────────────────────────────────────────────────────────────────┬──────────┐
│ DIGEST │ STATUS │
├─────────────────────────────────────────────────────────────────────────┼──────────┤
│ sha256:4ff3ca91275773af45cb4b0834e12b7eb47d1c18f770a0b151381cd227f4c253 │ deleting │
└─────────────────────────────────────────────────────────────────────────┴──────────┘
...
...
...
Verify the Repository and All Images are Deleted
Check the repository list.
# anchorectl repo list
✔ Fetched repos
┌─────┬──────┬────────┐
│ KEY │ TYPE │ ACTIVE │
├─────┼──────┼────────┤
└─────┴──────┴────────┘
Check the image list.
# anchorectl image list | grep docker.io/alpine
✔ Fetched images
<no output>
Next Steps
7 - Feeds Overview
Anchore Enterprise uses security vulnerability and package data from a number of sources:
Feed vulnerabilities - security advisories from specific Linux Distribution vendors against Distribution specific packages.
- Alpine Linux
- CentOS
- Debian
- Oracle Linux
- Red Hat Enterprise Linux
- Red Hat Universal Base Image (UBI)
- Ubuntu
- Amazon Linux 2
- Google Distroless
Feed packages - Software Package Repositories
Feed nvd - NIST National Vulnerability Database (NVD)
The Anchore Feed Service collects vulnerability and package data from the upstream sources and normalizes this data to be published
as feeds that Anchore Enterprise can subscribe to.
Anchore Enterprise polls the feed service at a user defined interval, by default every six hours, and will download feed data
updated since the last sync.
Anchore hosts a public service on the Anchore Cloud which provides access, for free, to all public feeds.
An on-premises feed service is available for commercial customers allowing Anchore Enterprise to synchronize with a locally
deployed feed service, without any reliance on Anchore Cloud.
7.1 - Feed Configuration
Feed Synchronization Interval
The default configuration for Anchore Enterprise will download vulnerability data from Anchore's feed service every 21,600 seconds (6 hours).
For most users the only configuration option that is typically updated is the feed synchronization interval - the time interval (in seconds) at which the feed sync is run.
.....
cycle_timers:
...
feed_sync: 14400
Feed Data Settings
Feed data configuration is set in the config.yaml file used by the policy engine service. The services.policy_engine.vulnerabilities.sync.data section of the configuration file controls the behavior of the data to be synced. In addition, the data groups that can be synced depend on the services.policy_engine.vulnerabilities.provider, and are explained in detail in the following sections.
Feed Groups
Anchore Enterprise is configured with grype as the services.policy_engine.vulnerabilities.provider and with the grypedb feed group enabled.
The grypedb
feed group syncs a single Grype database to the policy engine.
A Grype database contains data that spans multiple groups. Due to this encapsulation, it is not possible to enable or disable individual feed groups.
Anchore Enterprise will default to downloading the feed group from a publicly accessible URL maintained by Grype https://toolbox-data.anchore.io/grype/databases/listing.json.
The Grype database available from this endpoint does not include third-party/proprietary groups such as MSRC.
To get those groups, set url
(or override the environment variable ANCHORE_GRYPE_DB_URL
) to your local feed service.
services:
...
policy_engine:
...
vulnerabilities:
provider: grype
...
sync:
...
data:
grypedb:
enabled: true
url: ${ANCHORE_GRYPE_DB_URL}
Read Timeout
Under rare circumstances you may see syncs failing with errors fetching data due to timeouts. This is typically due to load on the feed service, network issues, or
some other temporary condition. However, if you want to increase the likelihood of success, you can raise the timeout by modifying the read_timeout_seconds value of the feeds configuration:
feeds:
...
read_timeout_seconds: 180
Controlling Which Feeds and Groups are Synced
Note: The package and nvd data feeds are large, so the initial sync can take some time to complete.
During initial feed sync, you can always query the progress and status of the feed sync using anchorectl
.
# anchorectl feed list
✔ List feed
┌─────────────────┬────────────────────┬─────────┬──────────────────────┬──────────────┐
│ FEED │ GROUP │ ENABLED │ LAST SYNC │ RECORD COUNT │
├─────────────────┼────────────────────┼─────────┼──────────────────────┼──────────────┤
│ vulnerabilities │ github:composer │ true │ 2023-08-21T16:41:04Z │ 2148 │
│ vulnerabilities │ github:gem │ true │ 2023-08-21T16:41:04Z │ 700 │
│ vulnerabilities │ github:go │ true │ 2023-08-21T16:41:04Z │ 1176 │
│ vulnerabilities │ github:java │ true │ 2023-08-21T16:41:04Z │ 3848 │
│ vulnerabilities │ github:npm │ true │ 2023-08-21T16:41:04Z │ 3450 │
│ vulnerabilities │ github:nuget │ true │ 2023-08-21T16:41:04Z │ 496 │
│ vulnerabilities │ github:python │ true │ 2023-08-21T16:41:04Z │ 1966 │
│ vulnerabilities │ github:rust │ true │ 2023-08-21T16:41:04Z │ 628 │
│ vulnerabilities │ nvd │ true │ 2023-08-21T16:41:04Z │ 223049 │
│ vulnerabilities │ alpine:3.10 │ true │ 2023-08-21T16:41:04Z │ 2321 │
│ vulnerabilities │ alpine:3.11 │ true │ 2023-08-21T16:41:04Z │ 2659 │
│ vulnerabilities │ alpine:3.12 │ true │ 2023-08-21T16:41:04Z │ 3193 │
│ vulnerabilities │ alpine:3.13 │ true │ 2023-08-21T16:41:04Z │ 3684 │
│ vulnerabilities │ alpine:3.14 │ true │ 2023-08-21T16:41:04Z │ 4265 │
│ vulnerabilities │ alpine:3.15 │ true │ 2023-08-21T16:41:04Z │ 4760 │
│ vulnerabilities │ alpine:3.16 │ true │ 2023-08-21T16:41:04Z │ 5146 │
│ vulnerabilities │ alpine:3.17 │ true │ 2023-08-21T16:41:04Z │ 5399 │
│ vulnerabilities │ alpine:3.18 │ true │ 2023-08-21T16:41:04Z │ 5566 │
│ vulnerabilities │ alpine:3.2 │ true │ 2023-08-21T16:41:04Z │ 305 │
│ vulnerabilities │ alpine:3.3 │ true │ 2023-08-21T16:41:04Z │ 470 │
│ vulnerabilities │ alpine:3.4 │ true │ 2023-08-21T16:41:04Z │ 679 │
│ vulnerabilities │ alpine:3.5 │ true │ 2023-08-21T16:41:04Z │ 902 │
│ vulnerabilities │ alpine:3.6 │ true │ 2023-08-21T16:41:04Z │ 1075 │
│ vulnerabilities │ alpine:3.7 │ true │ 2023-08-21T16:41:04Z │ 1461 │
│ vulnerabilities │ alpine:3.8 │ true │ 2023-08-21T16:41:04Z │ 1671 │
│ vulnerabilities │ alpine:3.9 │ true │ 2023-08-21T16:41:04Z │ 1955 │
│ vulnerabilities │ alpine:edge │ true │ 2023-08-21T16:41:04Z │ 5571 │
│ vulnerabilities │ amzn:2 │ true │ 2023-08-21T16:41:04Z │ 1381 │
│ vulnerabilities │ amzn:2022 │ true │ 2023-08-21T16:41:04Z │ 276 │
│ vulnerabilities │ amzn:2023 │ true │ 2023-08-21T16:41:04Z │ 300 │
│ vulnerabilities │ chainguard:rolling │ true │ 2023-08-21T16:41:04Z │ 378 │
│ vulnerabilities │ debian:10 │ true │ 2023-08-21T16:41:04Z │ 27731 │
│ vulnerabilities │ debian:11 │ true │ 2023-08-21T16:41:04Z │ 27886 │
│ vulnerabilities │ debian:12 │ true │ 2023-08-21T16:41:04Z │ 26675 │
│ vulnerabilities │ debian:13 │ true │ 2023-08-21T16:41:04Z │ 26359 │
│ vulnerabilities │ debian:7 │ true │ 2023-08-21T16:41:04Z │ 20455 │
│ vulnerabilities │ debian:8 │ true │ 2023-08-21T16:41:04Z │ 24058 │
│ vulnerabilities │ debian:9 │ true │ 2023-08-21T16:41:04Z │ 28240 │
│ vulnerabilities │ debian:unstable │ true │ 2023-08-21T16:41:04Z │ 30185 │
│ vulnerabilities │ mariner:1.0 │ true │ 2023-08-21T16:41:04Z │ 2096 │
│ vulnerabilities │ mariner:2.0 │ true │ 2023-08-21T16:41:04Z │ 1774 │
│ vulnerabilities │ ol:5 │ true │ 2023-08-21T16:41:04Z │ 1255 │
│ vulnerabilities │ ol:6 │ true │ 2023-08-21T16:41:04Z │ 1695 │
│ vulnerabilities │ ol:7 │ true │ 2023-08-21T16:41:04Z │ 2005 │
│ vulnerabilities │ ol:8 │ true │ 2023-08-21T16:41:04Z │ 1372 │
│ vulnerabilities │ ol:9 │ true │ 2023-08-21T16:41:04Z │ 359 │
│ vulnerabilities │ rhel:5 │ true │ 2023-08-21T16:41:04Z │ 6995 │
│ vulnerabilities │ rhel:6 │ true │ 2023-08-21T16:41:04Z │ 8720 │
│ vulnerabilities │ rhel:7 │ true │ 2023-08-21T16:41:04Z │ 8452 │
│ vulnerabilities │ rhel:8 │ true │ 2023-08-21T16:41:04Z │ 4828 │
│ vulnerabilities │ rhel:9 │ true │ 2023-08-21T16:41:04Z │ 1752 │
│ vulnerabilities │ sles:11 │ true │ 2023-08-21T16:41:04Z │ 594 │
│ vulnerabilities │ sles:11.1 │ true │ 2023-08-21T16:41:04Z │ 6125 │
│ vulnerabilities │ sles:11.2 │ true │ 2023-08-21T16:41:04Z │ 3291 │
│ vulnerabilities │ sles:11.3 │ true │ 2023-08-21T16:41:04Z │ 7081 │
│ vulnerabilities │ sles:11.4 │ true │ 2023-08-21T16:41:04Z │ 6583 │
│ vulnerabilities │ sles:12 │ true │ 2023-08-21T16:41:04Z │ 5948 │
│ vulnerabilities │ sles:12.1 │ true │ 2023-08-21T16:41:04Z │ 6205 │
│ vulnerabilities │ sles:12.2 │ true │ 2023-08-21T16:41:04Z │ 8306 │
│ vulnerabilities │ sles:12.3 │ true │ 2023-08-21T16:41:04Z │ 10161 │
│ vulnerabilities │ sles:12.4 │ true │ 2023-08-21T16:41:04Z │ 10121 │
│ vulnerabilities │ sles:12.5 │ true │ 2023-08-21T16:41:04Z │ 10728 │
│ vulnerabilities │ sles:15 │ true │ 2023-08-21T16:41:04Z │ 8738 │
│ vulnerabilities │ sles:15.1 │ true │ 2023-08-21T16:41:04Z │ 8852 │
│ vulnerabilities │ sles:15.2 │ true │ 2023-08-21T16:41:04Z │ 8455 │
│ vulnerabilities │ sles:15.3 │ true │ 2023-08-21T16:41:04Z │ 8753 │
│ vulnerabilities │ sles:15.4 │ true │ 2023-08-21T16:41:04Z │ 8678 │
│ vulnerabilities │ sles:15.5 │ true │ 2023-08-21T16:41:04Z │ 7753 │
│ vulnerabilities │ ubuntu:12.04 │ true │ 2023-08-21T16:41:04Z │ 14934 │
│ vulnerabilities │ ubuntu:12.10 │ true │ 2023-08-21T16:41:04Z │ 5641 │
│ vulnerabilities │ ubuntu:13.04 │ true │ 2023-08-21T16:41:04Z │ 4117 │
│ vulnerabilities │ ubuntu:14.04 │ true │ 2023-08-21T16:41:04Z │ 32822 │
│ vulnerabilities │ ubuntu:14.10 │ true │ 2023-08-21T16:41:04Z │ 4437 │
│ vulnerabilities │ ubuntu:15.04 │ true │ 2023-08-21T16:41:04Z │ 6220 │
│ vulnerabilities │ ubuntu:15.10 │ true │ 2023-08-21T16:41:04Z │ 6489 │
│ vulnerabilities │ ubuntu:16.04 │ true │ 2023-08-21T16:41:04Z │ 29968 │
│ vulnerabilities │ ubuntu:16.10 │ true │ 2023-08-21T16:41:04Z │ 8607 │
│ vulnerabilities │ ubuntu:17.04 │ true │ 2023-08-21T16:41:04Z │ 9094 │
│ vulnerabilities │ ubuntu:17.10 │ true │ 2023-08-21T16:41:04Z │ 7900 │
│ vulnerabilities │ ubuntu:18.04 │ true │ 2023-08-21T16:41:04Z │ 24446 │
│ vulnerabilities │ ubuntu:18.10 │ true │ 2023-08-21T16:41:04Z │ 8368 │
│ vulnerabilities │ ubuntu:19.04 │ true │ 2023-08-21T16:41:04Z │ 8635 │
│ vulnerabilities │ ubuntu:19.10 │ true │ 2023-08-21T16:41:04Z │ 8416 │
│ vulnerabilities │ ubuntu:20.04 │ true │ 2023-08-21T16:41:04Z │ 18500 │
│ vulnerabilities │ ubuntu:20.10 │ true │ 2023-08-21T16:41:04Z │ 9979 │
│ vulnerabilities │ ubuntu:21.04 │ true │ 2023-08-21T16:41:04Z │ 11310 │
│ vulnerabilities │ ubuntu:21.10 │ true │ 2023-08-21T16:41:04Z │ 12627 │
│ vulnerabilities │ ubuntu:22.04 │ true │ 2023-08-21T16:41:04Z │ 16763 │
│ vulnerabilities │ ubuntu:22.10 │ true │ 2023-08-21T16:41:04Z │ 14506 │
│ vulnerabilities │ ubuntu:23.04 │ true │ 2023-08-21T16:41:04Z │ 14044 │
│ vulnerabilities │ wolfi:rolling │ true │ 2023-08-21T16:41:04Z │ 353 │
└─────────────────┴────────────────────┴─────────┴──────────────────────┴──────────────┘
Using the Config File to Include/Exclude Feeds at System Bootstrap
The most common way to set which feeds are synced is in the config.yaml for the policy engine. By default,
the vulnerabilities, nvdv2, and github feeds are synced to provide good vulnerability matching support for a variety of Linux distros
and application package types. Normally it will not be necessary to modify that set.
To disable a feed or enable a disabled feed, modify the config.yaml’s feeds section to:
feeds:
selective_sync:
enabled: true
feeds:
vulnerabilities: true
nvdv2: true
github: true
packages: false
Those boolean values can be used to enable and disable feeds. Note that changes will require a restart of the policy engine to take effect, and setting
a feed to 'false' will not remove any data or change what is shown in the API or via AnchoreCTL; it will simply skip updates during sync operations.
7.2 - Feed Synchronization
When Anchore Enterprise runs it will begin to synchronize security feed data from the Anchore feed service.
CVE data for Linux distributions such as Alpine, CentOS, Debian, Oracle, Red Hat and Ubuntu will be downloaded. The initial sync may take anywhere from 10 to 60 minutes depending on the speed of your network connection.
Checking Feed Status
Feed information can be retrieved through the API and AnchoreCTL.
# anchorectl feed list
✔ List feed
┌─────────────────┬────────────────────┬─────────┬──────────────────────┬──────────────┐
│ FEED │ GROUP │ ENABLED │ LAST SYNC │ RECORD COUNT │
├─────────────────┼────────────────────┼─────────┼──────────────────────┼──────────────┤
│ vulnerabilities │ github:composer │ true │ 2023-08-21T16:41:04Z │ 2148 │
│ vulnerabilities │ github:gem │ true │ 2023-08-21T16:41:04Z │ 700 │
│ vulnerabilities │ github:go │ true │ 2023-08-21T16:41:04Z │ 1176 │
│ vulnerabilities │ github:java │ true │ 2023-08-21T16:41:04Z │ 3848 │
│ vulnerabilities │ github:npm │ true │ 2023-08-21T16:41:04Z │ 3450 │
│ vulnerabilities │ github:nuget │ true │ 2023-08-21T16:41:04Z │ 496 │
│ vulnerabilities │ github:python │ true │ 2023-08-21T16:41:04Z │ 1966 │
│ vulnerabilities │ github:rust │ true │ 2023-08-21T16:41:04Z │ 628 │
│ vulnerabilities │ nvd │ true │ 2023-08-21T16:41:04Z │ 223049 │
│ vulnerabilities │ alpine:3.10 │ true │ 2023-08-21T16:41:04Z │ 2321 │
│ vulnerabilities │ alpine:3.11 │ true │ 2023-08-21T16:41:04Z │ 2659 │
│ vulnerabilities │ alpine:3.12 │ true │ 2023-08-21T16:41:04Z │ 3193 │
│ vulnerabilities │ alpine:3.13 │ true │ 2023-08-21T16:41:04Z │ 3684 │
│ vulnerabilities │ alpine:3.14 │ true │ 2023-08-21T16:41:04Z │ 4265 │
│ vulnerabilities │ alpine:3.15 │ true │ 2023-08-21T16:41:04Z │ 4760 │
│ vulnerabilities │ alpine:3.16 │ true │ 2023-08-21T16:41:04Z │ 5146 │
│ vulnerabilities │ alpine:3.17 │ true │ 2023-08-21T16:41:04Z │ 5399 │
│ vulnerabilities │ alpine:3.18 │ true │ 2023-08-21T16:41:04Z │ 5566 │
│ vulnerabilities │ alpine:3.2 │ true │ 2023-08-21T16:41:04Z │ 305 │
│ vulnerabilities │ alpine:3.3 │ true │ 2023-08-21T16:41:04Z │ 470 │
│ vulnerabilities │ alpine:3.4 │ true │ 2023-08-21T16:41:04Z │ 679 │
│ vulnerabilities │ alpine:3.5 │ true │ 2023-08-21T16:41:04Z │ 902 │
│ vulnerabilities │ alpine:3.6 │ true │ 2023-08-21T16:41:04Z │ 1075 │
│ vulnerabilities │ alpine:3.7 │ true │ 2023-08-21T16:41:04Z │ 1461 │
│ vulnerabilities │ alpine:3.8 │ true │ 2023-08-21T16:41:04Z │ 1671 │
│ vulnerabilities │ alpine:3.9 │ true │ 2023-08-21T16:41:04Z │ 1955 │
│ vulnerabilities │ alpine:edge │ true │ 2023-08-21T16:41:04Z │ 5571 │
│ vulnerabilities │ amzn:2 │ true │ 2023-08-21T16:41:04Z │ 1381 │
│ vulnerabilities │ amzn:2022 │ true │ 2023-08-21T16:41:04Z │ 276 │
│ vulnerabilities │ amzn:2023 │ true │ 2023-08-21T16:41:04Z │ 300 │
│ vulnerabilities │ chainguard:rolling │ true │ 2023-08-21T16:41:04Z │ 378 │
│ vulnerabilities │ debian:10 │ true │ 2023-08-21T16:41:04Z │ 27731 │
│ vulnerabilities │ debian:11 │ true │ 2023-08-21T16:41:04Z │ 27886 │
│ vulnerabilities │ debian:12 │ true │ 2023-08-21T16:41:04Z │ 26675 │
│ vulnerabilities │ debian:13 │ true │ 2023-08-21T16:41:04Z │ 26359 │
│ vulnerabilities │ debian:7 │ true │ 2023-08-21T16:41:04Z │ 20455 │
│ vulnerabilities │ debian:8 │ true │ 2023-08-21T16:41:04Z │ 24058 │
│ vulnerabilities │ debian:9 │ true │ 2023-08-21T16:41:04Z │ 28240 │
│ vulnerabilities │ debian:unstable │ true │ 2023-08-21T16:41:04Z │ 30185 │
│ vulnerabilities │ mariner:1.0 │ true │ 2023-08-21T16:41:04Z │ 2096 │
│ vulnerabilities │ mariner:2.0 │ true │ 2023-08-21T16:41:04Z │ 1774 │
│ vulnerabilities │ ol:5 │ true │ 2023-08-21T16:41:04Z │ 1255 │
│ vulnerabilities │ ol:6 │ true │ 2023-08-21T16:41:04Z │ 1695 │
│ vulnerabilities │ ol:7 │ true │ 2023-08-21T16:41:04Z │ 2005 │
│ vulnerabilities │ ol:8 │ true │ 2023-08-21T16:41:04Z │ 1372 │
│ vulnerabilities │ ol:9 │ true │ 2023-08-21T16:41:04Z │ 359 │
│ vulnerabilities │ rhel:5 │ true │ 2023-08-21T16:41:04Z │ 6995 │
│ vulnerabilities │ rhel:6 │ true │ 2023-08-21T16:41:04Z │ 8720 │
│ vulnerabilities │ rhel:7 │ true │ 2023-08-21T16:41:04Z │ 8452 │
│ vulnerabilities │ rhel:8 │ true │ 2023-08-21T16:41:04Z │ 4828 │
│ vulnerabilities │ rhel:9 │ true │ 2023-08-21T16:41:04Z │ 1752 │
│ vulnerabilities │ sles:11 │ true │ 2023-08-21T16:41:04Z │ 594 │
│ vulnerabilities │ sles:11.1 │ true │ 2023-08-21T16:41:04Z │ 6125 │
│ vulnerabilities │ sles:11.2 │ true │ 2023-08-21T16:41:04Z │ 3291 │
│ vulnerabilities │ sles:11.3 │ true │ 2023-08-21T16:41:04Z │ 7081 │
│ vulnerabilities │ sles:11.4 │ true │ 2023-08-21T16:41:04Z │ 6583 │
│ vulnerabilities │ sles:12 │ true │ 2023-08-21T16:41:04Z │ 5948 │
│ vulnerabilities │ sles:12.1 │ true │ 2023-08-21T16:41:04Z │ 6205 │
│ vulnerabilities │ sles:12.2 │ true │ 2023-08-21T16:41:04Z │ 8306 │
│ vulnerabilities │ sles:12.3 │ true │ 2023-08-21T16:41:04Z │ 10161 │
│ vulnerabilities │ sles:12.4 │ true │ 2023-08-21T16:41:04Z │ 10121 │
│ vulnerabilities │ sles:12.5 │ true │ 2023-08-21T16:41:04Z │ 10728 │
│ vulnerabilities │ sles:15 │ true │ 2023-08-21T16:41:04Z │ 8738 │
│ vulnerabilities │ sles:15.1 │ true │ 2023-08-21T16:41:04Z │ 8852 │
│ vulnerabilities │ sles:15.2 │ true │ 2023-08-21T16:41:04Z │ 8455 │
│ vulnerabilities │ sles:15.3 │ true │ 2023-08-21T16:41:04Z │ 8753 │
│ vulnerabilities │ sles:15.4 │ true │ 2023-08-21T16:41:04Z │ 8678 │
│ vulnerabilities │ sles:15.5 │ true │ 2023-08-21T16:41:04Z │ 7753 │
│ vulnerabilities │ ubuntu:12.04 │ true │ 2023-08-21T16:41:04Z │ 14934 │
│ vulnerabilities │ ubuntu:12.10 │ true │ 2023-08-21T16:41:04Z │ 5641 │
│ vulnerabilities │ ubuntu:13.04 │ true │ 2023-08-21T16:41:04Z │ 4117 │
│ vulnerabilities │ ubuntu:14.04 │ true │ 2023-08-21T16:41:04Z │ 32822 │
│ vulnerabilities │ ubuntu:14.10 │ true │ 2023-08-21T16:41:04Z │ 4437 │
│ vulnerabilities │ ubuntu:15.04 │ true │ 2023-08-21T16:41:04Z │ 6220 │
│ vulnerabilities │ ubuntu:15.10 │ true │ 2023-08-21T16:41:04Z │ 6489 │
│ vulnerabilities │ ubuntu:16.04 │ true │ 2023-08-21T16:41:04Z │ 29968 │
│ vulnerabilities │ ubuntu:16.10 │ true │ 2023-08-21T16:41:04Z │ 8607 │
│ vulnerabilities │ ubuntu:17.04 │ true │ 2023-08-21T16:41:04Z │ 9094 │
│ vulnerabilities │ ubuntu:17.10 │ true │ 2023-08-21T16:41:04Z │ 7900 │
│ vulnerabilities │ ubuntu:18.04 │ true │ 2023-08-21T16:41:04Z │ 24446 │
│ vulnerabilities │ ubuntu:18.10 │ true │ 2023-08-21T16:41:04Z │ 8368 │
│ vulnerabilities │ ubuntu:19.04 │ true │ 2023-08-21T16:41:04Z │ 8635 │
│ vulnerabilities │ ubuntu:19.10 │ true │ 2023-08-21T16:41:04Z │ 8416 │
│ vulnerabilities │ ubuntu:20.04 │ true │ 2023-08-21T16:41:04Z │ 18500 │
│ vulnerabilities │ ubuntu:20.10 │ true │ 2023-08-21T16:41:04Z │ 9979 │
│ vulnerabilities │ ubuntu:21.04 │ true │ 2023-08-21T16:41:04Z │ 11310 │
│ vulnerabilities │ ubuntu:21.10 │ true │ 2023-08-21T16:41:04Z │ 12627 │
│ vulnerabilities │ ubuntu:22.04 │ true │ 2023-08-21T16:41:04Z │ 16763 │
│ vulnerabilities │ ubuntu:22.10 │ true │ 2023-08-21T16:41:04Z │ 14506 │
│ vulnerabilities │ ubuntu:23.04 │ true │ 2023-08-21T16:41:04Z │ 14044 │
│ vulnerabilities │ wolfi:rolling │ true │ 2023-08-21T16:41:04Z │ 353 │
└─────────────────┴────────────────────┴─────────┴──────────────────────┴──────────────┘
This command lists the feeds synchronized by Anchore Enterprise, the last sync time, and the current record count.
Note: Time is reported as UTC, not local time.
Manually initiating feed sync
After the initial sync has completed, the system will run an incremental sync at a user-defined period, by default every 4 hours. At any time a feed sync can be initiated through the API or AnchoreCTL.
A sync operation can be manually initiated by running the feed sync command; however, this should not be required under normal operation.
# anchorectl feed sync
✔ Sync feed
┌─────────────────┬─────────┬──────────────┐
│ FEED │ STATUS │ TIME TO SYNC │
├─────────────────┼─────────┼──────────────┤
│ vulnerabilities │ success │ 0 │
└─────────────────┴─────────┴──────────────┘
Anchore Enterprise can be instructed to flush the current feed data and perform a full synchronization.
NOTE: Under normal circumstances this operation should not be performed since Anchore Enterprise performs regular incremental sync.
NOTE: This process may take anywhere from 10 to 60 minutes depending on the speed of your network connection, and will cause interruptions in regular operations during sync. It is included for testing and troubleshooting scenarios only.
# anchorectl feed sync --flush
✔ Sync feed
┌─────────────────┬─────────┬──────────────┐
│ FEED │ STATUS │ TIME TO SYNC │
├─────────────────┼─────────┼──────────────┤
│ vulnerabilities │ success │ 0 │
└─────────────────┴─────────┴──────────────┘
8 - Accounts and Users
System Initialization
When the system first initializes it creates a system service account (invisible to users) and an administrator account (admin) with a single administrator user (admin). The password for this user is set at bootstrap using a default value or an override available in the config.yaml on the catalog service (which is what initializes the db). There are two top-level keys in the config.yaml that control this bootstrap:
default_admin_password - To set the initial password (can be updated by using the API once the system is bootstrapped). Defaults to foobar if omitted or unset.
default_admin_email - To set the initial admin account email on bootstrap. Defaults to admin@myanchore if unset.
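For example, to override both values at bootstrap, the top level of the catalog service's config.yaml could contain (illustrative values):
default_admin_password: <strong-password>
default_admin_email: [email protected]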
Managing Accounts Using AnchoreCTL
These operations must be executed by a user in the admin account. These examples are executed from within the enterprise-api container if using the quickstart guide:
First, exec
into the enterprise-api container, if using the quickstart docker-compose. For other deployment types (e.g. a Helm chart deployment into Kubernetes), execute these commands anywhere you have AnchoreCTL installed that can reach the external API endpoint for your deployment.
docker-compose exec enterprise-api /bin/bash
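If you are running AnchoreCTL outside the container, it needs to be pointed at your deployment. A minimal sketch, assuming the ANCHORECTL_URL environment variable and the default quickstart API port, alongside the credential variables used elsewhere in this guide:
export ANCHORECTL_URL="http://localhost:8228"   # external API endpoint (assumption: quickstart default port)
export ANCHORECTL_USERNAME="admin"
export ANCHORECTL_PASSWORD="foobar"             # default bootstrap password; change this in production
anchorectl account list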
To list all the currently present accounts in the system, perform the following command:
# anchorectl account list
✔ Fetched accounts
┌──────────┬────────────────────┬──────────┐
│ NAME │ EMAIL │ STATE │
├──────────┼────────────────────┼──────────┤
│ admin │ admin@myanchore │ enabled │
│ devteam1 │ [email protected] │ enabled │
│ devteam2 │ [email protected] │ enabled │
└──────────┴────────────────────┴──────────┘
To review the list of users for a specific account, issue the following:
# anchorectl user list --account devteam1
✔ Fetched users
┌───────────────┬──────────────────────┬──────────────────────┬────────┬────────┐
│ USERNAME │ CREATED AT │ LAST UPDATED │ SOURCE │ TYPE │
├───────────────┼──────────────────────┼──────────────────────┼────────┼────────┤
│ devteam1admin │ 2022-08-25T17:43:43Z │ 2022-08-25T17:43:43Z │ │ native │
└───────────────┴──────────────────────┴──────────────────────┴────────┴────────┘
Adding a New Account
To add a new account which, by default, will have no active credentials, issue the following command:
# anchorectl account add devteam1 --email [email protected]
✔ Added account
Name: devteam1
Email: [email protected]
State: enabled
Note that the email address is optional and can be omitted.
At this point the account exists but contains no users. To create a user with a password, see below in the Managing Users section.
Disabling Account
Disabling an account prevents any of that account's users from being able to perform any actions in the system. It also disables all asynchronous updates on resources in that account, effectively freezing the state of the account and all of its resources. Disabling an account is idempotent; if it is already disabled the operation has no effect. Accounts may be re-enabled after being disabled.
# anchorectl account disable devteam1
✔ Disabled account
State: disabled
Enabling an Account
To restore a disabled account to allow user operations and resource updates, simply enable it. This is idempotent, enabling an already enabled account has no effect.
# anchorectl account enable devteam1
✔ Enabled account
State: enabled
Deleting an Account
Note: Deleting an account is irreversible and will delete all of its resources (images, policies, evaluations, etc).
Deleting an account will synchronously delete all users and credentials for the account and transition the account to the deleting state. At this point the system will begin reaping all resources for the account. Once that reaping process is complete, the account record itself is deleted. An account must be in a disabled state prior to deletion. Failure to be in this state results in an error:
# anchorectl account delete devteam1
error: 1 error occurred:
* unable to delete account:
{
"detail": {
"error_codes": []
},
"httpcode": 400,
"message": "Invalid account state change requested. Cannot go from state enabled to state deleting"
}
So, first you must disable the account, as shown above. Once disabled:
# anchorectl account disable devteam1
✔ Disabled account
State: disabled
# anchorectl account delete devteam1
✔ Deleted account
No results
# anchorectl account get devteam1
✔ Fetched account
Name: devteam1
Email: [email protected]
State: deleting
Managing Users Using AnchoreCTL
Users exist within accounts, but usernames themselves are globally unique since they are used for authenticating API requests. User management can be performed by any user in the admin account in the default Anchore Enterprise configuration using the native authorizer. For more information on configuring other authorization plugins see: Authorization Plugins and Configuration.
Create User in a User-Type Account
To create a new user credential within a specified account, you can issue the following command. Note that the ‘role’ assigned will dictate the API/operation level permissions granted to this new user. See help output for a list of available roles, or for more information you can review roles and associated permissions via the Anchore Enterprise UI. In the following example, we’re granting the new user the ‘full-control’ role, which gives the credential full access to operations within the ‘devteam1’ account namespace.
# ANCHORECTL_USER_PASSWORD=devteam1adminp4ssw0rd anchorectl user add --account devteam1 devteam1admin --role full-control
✔ Added user devteam1admin
Username: devteam1admin
Created At: 2022-08-25T17:50:18Z
Last Updated: 2022-08-25T17:50:18Z
Source:
Type: native
# anchorectl user list --account devteam1
✔ Fetched users
┌───────────────┬──────────────────────┬──────────────────────┬────────┬────────┐
│ USERNAME │ CREATED AT │ LAST UPDATED │ SOURCE │ TYPE │
├───────────────┼──────────────────────┼──────────────────────┼────────┼────────┤
│ devteam1admin │ 2022-08-25T17:50:18Z │ 2022-08-25T17:50:18Z │ │ native │
└───────────────┴──────────────────────┴──────────────────────┴────────┴────────┘
That user may now use the API:
# ANCHORECTL_USERNAME=devteam1admin ANCHORECTL_PASSWORD=devteam1adminp4ssw0rd ANCHORECTL_ACCOUNT=devteam1 anchorectl user list
✔ Fetched users
┌───────────────┬──────────────────────┬──────────────────────┬────────┬────────┐
│ USERNAME │ CREATED AT │ LAST UPDATED │ SOURCE │ TYPE │
├───────────────┼──────────────────────┼──────────────────────┼────────┼────────┤
│ devteam1admin │ 2022-08-25T17:50:18Z │ 2022-08-25T17:50:18Z │ │ native │
└───────────────┴──────────────────────┴──────────────────────┴────────┴────────┘
Deleting a User
Using the admin credential, or a credential that has a user management role assigned for an account, you can delete a user with the following command. In this example, we’re using the admin credential to delete a user in the ‘devteam1’ account:
ANCHORECTL_USERNAME=admin ANCHORECTL_ACCOUNT=admin ANCHORECTL_PASSWORD=foobar anchorectl user delete devteam1admin --account devteam1
✔ Deleted user
No results
Updating a User Password
Note that only system admins can execute this for a different user/account.
As an admin, to reset another user's credentials:
# ANCHORECTL_USER_PASSWORD=n3wp4ssw0rd anchorectl user set-password devteam1admin --account devteam1
✔ User password set
Type: password
Value: ***********
Created At: 2022-08-25T17:58:32Z
To update your own password:
# ANCHORECTL_USERNAME=devteam1admin ANCHORECTL_PASSWORD=existingp4ssw0rd ANCHORECTL_ACCOUNT=devteam1 anchorectl user set-password devteam1admin
❖ Enter new user password : ●●●●●●●●●●●
❖ Retype new user password : ●●●●●●●●●●●
✔ User password set
Type: password
Value: ***********
Created At: 2022-08-25T18:00:35Z
Or, to perform the operation fully-scripted, you can set the new password as an environment variable:
ANCHORECTL_USERNAME=devteam1admin ANCHORECTL_PASSWORD=existingp4ssw0rd ANCHORECTL_ACCOUNT=devteam1 ANCHORECTL_USER_PASSWORD=n3wp4ssw0rd anchorectl user set-password devteam1admin
✔ User password set
Type: password
Value: ***********
Created At: 2022-08-25T18:01:19Z
9 - Event Log
Introduction
The event log subsystem provides users with a mechanism to inspect asynchronous events occurring across various Anchore Enterprise services. Anchore events include periodically triggered activities such as vulnerability data feed syncs in the policy-engine service, image analysis failures originating from the analyzer service, and other informational or system fault events. The catalog service may also generate events for any repositories or image tags that are being watched, when the system encounters connectivity, authentication, authorization or other errors in the process of checking for updates. The event log is aimed at troubleshooting the most common failure scenarios (especially those that happen during asynchronous operations) and at pinpointing the reasons for failures, which can subsequently be used to help with corrective actions. Events can be cleared from Anchore Enterprise in bulk or individually.
Anchore events (drawn from the event log) can be accessed through the Anchore Enterprise API and AnchoreCTL, or emitted as webhooks if your deployment is configured to send webhook notifications. For API usage, refer to the document on using the Anchore Enterprise API.
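As a rough illustration of direct API access, the event list can be retrieved with an authenticated GET request. The path below (/v1/events), the query parameter, and the host/port are assumptions based on a quickstart-style deployment and the legacy v1 API layout; check the Anchore Enterprise API document for the exact routes exposed by your version:
# curl -u admin:foobar "http://localhost:8228/v1/events?level=error"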
Accessing Events
The anchorectl command can be used to list events and filter through the results, get the details for a specific event and delete events matching certain criteria.
# anchorectl event --help
Event related operations
Usage:
event [command]
Available Commands:
delete Delete an event by its ID or set of filters
get Lookup an event by its event ID
list Returns a paginated list of events in the descending order of their occurrence
Flags:
-h, --help help for event
Use " event [command] --help" for more information about a command.
For help regarding global flags, run --help on the root command
For a list of the most recent events:
anchorectl event list
✔ List events
┌──────────────────────────────────┬──────────────────────────────────────────────┬───────┬─────────────────────────────────────────────────────────────────────────┬─────────────────┬────────────────┬────────────────────┬─────────────────────────────┐
│ UUID │ EVENT TYPE │ LEVEL │ RESOURCE ID │ RESOURCE TYPE │ SOURCE SERVICE │ SOURCE HOST │ TIMESTAMP │
├──────────────────────────────────┼──────────────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────────────────────┼─────────────────┼────────────────┼────────────────────┼─────────────────────────────┤
│ 8c179a3b27a543fe9285cf4feb65561d │ system.image_analysis.registry_lookup_failed │ error │ docker.io/alpine:3.4 │ image_reference │ catalog │ anchore-quickstart │ 2022-08-24T23:08:30.54001Z │
│ 48c18a84575d45efbf5b41e0f3a87177 │ system.image_analysis.registry_lookup_failed │ error │ docker.io/alpine:latest │ image_reference │ catalog │ anchore-quickstart │ 2022-08-24T23:08:30.510193Z │
│ f6084efd159c43a1a0518b6df5e58505 │ system.image_analysis.registry_lookup_failed │ error │ docker.io/alpine:3.12 │ image_reference │ catalog │ anchore-quickstart │ 2022-08-24T23:08:30.480625Z │
│ 4464b8f83df046388152067122c03610 │ system.image_analysis.registry_lookup_failed │ error │ docker.io/alpine:3.8 │ image_reference │ catalog │ anchore-quickstart │ 2022-08-24T23:08:30.450983Z │
...
│ 60f14821ff1d407199bc0bde62f537df │ system.image_analysis.restored_from_archive │ info │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest │ catalog │ anchore-quickstart │ 2022-08-24T22:53:12.662535Z │
│ cd749a99dca8493889391ae549d1bbc7 │ system.analysis_archive.image_archived │ info │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest │ catalog │ anchore-quickstart │ 2022-08-24T22:48:45.719941Z │
...
└──────────────────────────────────┴──────────────────────────────────────────────┴───────┴─────────────────────────────────────────────────────────────────────────┴─────────────────┴────────────────┴────────────────────┴─────────────────────────────┘
Note: Events are ordered by the timestamp of their occurrence; the most recent events are at the top of the list and the oldest at the bottom.
There are a number of ways to filter the event list output (see anchorectl event list --help for filter options):
For troubleshooting events related to a specific event type:
# anchorectl event list --event-type system.analysis_archive.image_archive_failed
✔ List events
┌──────────────────────────────────┬──────────────────────────────────────────────┬───────┬──────────────┬───────────────┬────────────────┬────────────────────┬────────────────────────────┐
│ UUID │ EVENT TYPE │ LEVEL │ RESOURCE ID │ RESOURCE TYPE │ SOURCE SERVICE │ SOURCE HOST │ TIMESTAMP │
├──────────────────────────────────┼──────────────────────────────────────────────┼───────┼──────────────┼───────────────┼────────────────┼────────────────────┼────────────────────────────┤
│ 35114639be6c43a6b79d1e0fef71338a │ system.analysis_archive.image_archive_failed │ error │ nginx:latest │ image_digest │ catalog │ anchore-quickstart │ 2022-08-24T22:48:23.18113Z │
└──────────────────────────────────┴──────────────────────────────────────────────┴───────┴──────────────┴───────────────┴────────────────┴────────────────────┴────────────────────────────┘
To filter events by level such as ERROR or INFO:
anchorectl event list --level info
✔ List events
┌──────────────────────────────────┬─────────────────────────────────────────────┬───────┬─────────────────────────────────────────────────────────────────────────┬───────────────┬────────────────┬────────────────────┬─────────────────────────────┐
│ UUID │ EVENT TYPE │ LEVEL │ RESOURCE ID │ RESOURCE TYPE │ SOURCE SERVICE │ SOURCE HOST │ TIMESTAMP │
├──────────────────────────────────┼─────────────────────────────────────────────┼───────┼─────────────────────────────────────────────────────────────────────────┼───────────────┼────────────────┼────────────────────┼─────────────────────────────┤
│ 60f14821ff1d407199bc0bde62f537df │ system.image_analysis.restored_from_archive │ info │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest │ catalog │ anchore-quickstart │ 2022-08-24T22:53:12.662535Z │
│ cd749a99dca8493889391ae549d1bbc7 │ system.analysis_archive.image_archived │ info │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ image_digest │ catalog │ anchore-quickstart │ 2022-08-24T22:48:45.719941Z │
...
Note: The event listing response is paginated; anchorectl displays the first 100 events matching the filters. To return all results, use the --all flag.
All available options for listing events:
# anchorectl event list --help
Returns a paginated list of events in the descending order of their occurrence. Optional query parameters may be used for filtering results
Usage:
event list [flags]
Flags:
--all return all events (env: ANCHORECTL_EVENT_ALL)
--before string return events that occurred before the ISO8601 formatted UTC timestamp
(env: ANCHORECTL_EVENT_BEFORE)
--event-type string filter events by a prefix match on the event type (e.g. "user.image.")
(env: ANCHORECTL_EVENT_TYPE)
-h, --help help for list
--host string filter events by the originating host ID (env: ANCHORECTL_EVENT_SOURCE_HOST_ID)
--level string filter events by the level - INFO or ERROR (env: ANCHORECTL_EVENT_LEVEL)
-o, --output string the format to show the results (allowable: [text json json-raw id]; env: ANCHORECTL_FORMAT) (default "text")
--page int32 return the nth page of results starting from 1. Defaults to first page if left empty
(env: ANCHORECTL_PAGE)
--resource-type string filter events by the type of resource - tag, imageDigest, repository etc
(env: ANCHORECTL_EVENT_RESOURCE_TYPE)
--service string filter events by the originating service (env: ANCHORECTL_EVENT_SOURCE_SERVICE_NAME)
--since string return events that occurred after the ISO8601 formatted UTC timestamp
(env: ANCHORECTL_EVENT_SINCE)
For help regarding global flags, run --help on the root command
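These filters can be combined, and the -o json output format is convenient for ad-hoc scripting. The following sketch (assuming jq is installed, and using an illustrative timestamp) pulls all error-level events since a given time and pretty-prints the JSON response for inspection:
# anchorectl event list --level error --since 2022-08-24T00:00:00Z --all -o json | jq .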
The event listing displays a brief summary of each event. To get more detailed information about an event, such as the host where it occurred or the underlying error, use the get subcommand:
# anchorectl event get c31eb023c67a4c9e95278473a026970c
✔ Fetched event
UUID: c31eb023c67a4c9e95278473a026970c
Event:
Event Type: system.image_analysis.registry_lookup_failed
Level: error
Message: Referenced image not found in registry
Resource:
Resource ID: docker.io/aerospike:latest
Resource Type: image_reference
User Id: admin
Source:
Source Service: catalog
Base Url: http://catalog:8228
Source Host: anchore-quickstart
Request Id:
Timestamp: 2022-08-24T22:08:28.811441Z
Category:
Details: cannot fetch image digest/manifest from registry
Created At: 2022-08-24T22:08:28.812749Z
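The same detail can be captured in machine-readable form by switching the output format, which is useful when feeding events into other tooling. The -o json flag is documented for the list subcommand above and is assumed here to behave the same way for get:
# anchorectl event get c31eb023c67a4c9e95278473a026970c -o json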
Clearing Events
Events can be cleared/deleted from the system in bulk or individually. Bulk deletion allows you to specify filters so that only events within a certain time window are cleared. To delete all events from the system:
# anchorectl event delete --all
Use the arrow keys to navigate: ↓ ↑ → ←
? Are you sure you want to delete all events:
▸ Yes
No
⠙ Deleting event
c31eb023c67a4c9e95278473a026970c
329ff24aa77549458e2656f1a6f4c98f
649ba60033284b87b6e3e7ab8de51e48
4010f105cf264be6839c7e8ca1a0c46e
...
Delete events that occurred before a specified timestamp (you can also use --since instead of --before to delete events generated after a specified timestamp):
# anchorectl event delete --before 2022-08-24T22:08:28.629543Z
✔ Deleted event
ce26f1fa1baf4adf803d35c86d7040b7
081394b6e62f4708a10e521a960c54d7
d21b587dea5844cc9c330ba2b3d02d2e
7784457e6bf84427a175658f134f3d6a
...
Delete a specific event:
# anchorectl event delete fa110d517d2e43faa8d8e2dfbb0596af
✔ Deleted event
fa110d517d2e43faa8d8e2dfbb0596af
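Individual deletion can also be combined with the list filters to clear only a subset of events. A minimal sketch, assuming the id output format shown in the list help above emits one UUID per line and that xargs is available, removes every error-level event one at a time:
# anchorectl event list --level error --all -o id | xargs -n1 anchorectl event delete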
Sending Events as Webhook Notifications
In addition to access via the API and AnchoreCTL, Anchore Enterprise may be configured to send notifications for events as they are generated, via its webhook subsystem. Webhook notifications for event log records are turned off by default. To enable the 'event_update' webhook, uncomment the 'event_log' section under 'services->catalog' in config.yaml, as in the following example:
services:
...
catalog:
...
event_log:
notification:
enabled: True
# (optional) notify events that match these levels. If this section is commented, notifications for all events are sent
level:
- error
Note: In order for events to be sent via webhook notifications, you'll need to ensure that the webhook subsystem is configured in config.yaml (if it isn't already) - refer to the document on subscriptions and notifications for information on how to enable webhooks in Anchore Enterprise. Event notifications will be sent to the 'event_update' webhook endpoint if it is defined, and to the 'general' webhook endpoint otherwise.
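For reference, a minimal webhook configuration in config.yaml looks roughly like the following sketch. The key names and the URL template are assumptions drawn from a typical quickstart configuration, and the receiver hostname is hypothetical; consult the subscriptions and notifications document for the authoritative schema:
webhooks:
  webhook_user: null
  webhook_pass: null
  ssl_verify: false
  general:
    url: "http://my-notification-receiver:9090/<notification_type>/<userId>"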