
Anchore Enterprise Release Notes - Version 3.1.0

Anchore Enterprise 3.1.0

This release adds automated runtime inventory scanning, runtime container compliance checks, a new vulnerability scanner option in tech preview, and a new Enterprise CLI, as well as other improvements and fixes.

New Features

Runtime Kubernetes Inventory Scanning with UI Support

Building on the runtime inventory features in the 3.0 release, Anchore can now automatically analyze images reported as in use in Kubernetes clusters, so that you can easily assess the security risk not only of images in your CI pipelines, but also of images running in production in your clusters. Additionally, the UI now supports visualizations of Kubernetes inventories and the vulnerability and policy compliance status of the inventory by namespace or cluster.

See Runtime Kubernetes Inventory for more information.

Runtime Compliance Checks for Containers

Anchore can now execute and collect runtime compliance checks using industry-standard tooling such as OpenSCAP, evaluating running containers against STIG requirements or any other compliance specification that can be described and checked using XCCDF profiles.

See Compliance Checks for more information on this feature.

New CLI with Integrated Pipeline Scanning Support

The new anchorectl tool provides an Enterprise-focused CLI experience, with support for analyzing images locally and importing the results into your deployment. You can also use it for other Enterprise operations, such as interacting with the new compliance reports and viewing or configuring inventory scanning.

Tech Preview Features

A new vulnerability scanner based on Grype is now available in tech preview. See Vulnerability Scanner V2 for more information. The new scanner is not enabled by default; you must opt in by setting a configuration value.

Enterprise Service Changes

This release contains a database schema update to version 0.0.8 for the enterprise schema and 0.0.15 for the engine schema. The upgrade process will modify the db schema and update some tables in the reporting service for any existing runtime inventory records. Unless you have a very large number of inventory records, the upgrade should complete in seconds to minutes depending on your database size.

Owned Package Filtering Control

A new configuration option, services.analyzer.enable_owned_package_filtering, is now available in the analyzer service configuration. By default, the analyzer filters out packages that are determined at analysis time to be “owned” by a parent package (that is, the parent package installs all of the child package’s files). That behavior can be disabled by setting this configuration value to “false”.

The default filtering removes false positives associated with distro packages that install language packages (such as python, npm, or gem packages) and carry backports applied by the distro maintainer without a corresponding language-package version change. However, if you package your own applications as rpms, debs, or similar and need all included packages to be scanned directly against NVD sources, you can disable this behavior.
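For example, a minimal sketch of the analyzer section of the service configuration with the filtering disabled (this assumes the standard config.yaml layout; surrounding keys in your deployment may differ):

services:
  analyzer:
    # Disable parent/child package-ownership filtering so that every discovered
    # package is reported and scanned regardless of ownership relationship
    enable_owned_package_filtering: false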

Added

  • New tech-preview vulnerability scanner
  • Improved Alpine vulnerability scanning by using NVD matches for OS packages for CVEs that are not yet present in the Alpine SecDB
  • Analyzer service configuration option to control package-ownership filtering. Allows exposing all packages regardless of ownership relationship

Fixed

  • Adds missing fields and fixes errors in the swagger spec for the API
  • Restores file package verification data ingress during image load to fix a regression
  • Fixed an issue where the malware policy gate could fail and cause a policy evaluation error when malware scanning was not enabled and other rules preceded the malware rule in a policy
  • JSON serialization error in internal policy engine user image listing API
  • “package_cpe23” field missing in vulnerabilities
  • Ensure python38 used in the Dockerfile build, and set tox tests to only run py38
  • Fixed an issue where users were unable to delete some notification configurations that their RBAC role should have permitted them to delete

Improved

  • Performance of GET operations between services improved by better streaming memory management for large payload transfers
  • Use UBI 8.4 as base image in Docker build
  • Updates skopeo version used to 1.2.1, allowing removal of the ’lookuptag’ field in the POST /repositories call for watching repositories that do not have a ’latest’ tag
  • Red Hat packages for an out-of-support distro release version are now indicated as vulnerable if a newer, supported distro release version is indicated as affected for the package

Additional minor bug fixes and enhancements

Known Issues/Errata

Note: the policy engine feed sync configuration is now in the policy engine service configuration as part of the provider configuration. The provided helm charts, docker-compose.yaml and default configurations handle this change automatically.

Deprecations

The affected_package_version query parameter in GET /query/vulnerabilities is not supported in the V2 scanner (aka Grype mode) and has known correctness issues in the legacy mode. It is deprecated and will be removed in a future release.
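For example, a query by vulnerability ID alone simply omits the deprecated parameter (a sketch; the base URL, credentials, and CVE identifier below are placeholders for your own deployment):

curl -u <user>:<password> "http://localhost:8228/v1/query/vulnerabilities?id=CVE-2021-3156"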

Enterprise UI Changes

Added

  • From the new Kubernetes Runtime Inventory view you can now inspect the spread of compliance and vulnerability information reported by the KAI agent across all detected Kubernetes clusters and namespaces in your deployment topology
  • Information relating to any items detected by the runtime agent is now surfaced in the repository- and tag-level views within the Image Selection hierarchy

Improved

  • If the reporting service fails, feature components that require this service as a dependency will be disabled in the navigation bar until service recovery
  • Pie-chart components have been restructured to present selected information inclusively when segments are clicked—other segments are now disabled

Fixes

  • Printable view assembly issues addressed in Image Analysis Vulnerability and Compliance views—charts now render correctly in portrait mode
  • The alerts banner is now subject to RBAC and will not appear if the fetch alert permission is not detected
  • Clipping issues resolved in the creation date popup in the Policy Bundle view
  • Supporting libraries have been updated in order to improve security, performance, and also to remove deprecation warnings from browser and server output logs

Additional minor bug fixes and enhancements

Upgrading

1 - STIG

Overview

You can use the Anchore runtime compliance API to gain insight into the security compliance of runtime environments. The intended consumers of this general-purpose API are tools that execute compliance checks against a running environment, such as the Security Technical Implementation Guide (STIG) checks that users can run on a Kubernetes cluster using Anchore’s Remote Execution Manager (REM). These tools can upload the results of an execution to Anchore through this new compliance API, which lets users leverage additional Anchore functionality such as reporting and correlating the runtime environment to images analyzed by Anchore. This enables deeper understanding of and insight into an image’s lifecycle and the ongoing security of the runtime environments deploying it.

Usage

The Compliance API can be found in the Enterprise API swagger specification. This API allows for the creation and retrieval of runtime compliance checks and any document reports provided in the creation calls.

The following is an example of the body of an API call that creates a runtime compliance check using the Compliance API; the call is submitted as a multipart form to support file upload:

{
  "check_type": "oscap", // type of compliance check to report
  "result": "pass", // overall result of the compliance check
  "pod": "postgres-9.6", // the Kubernetes pod the compliance check was run against
  "namespace": "dev", // the namespace of the pod
  "image_tag": "9.6", // tag of the image that the pod is running
  "image_digest": "sha256:a435b8edc3bdb4d766818dc6ce22ca3a5e6a922d19ca7001afd1359d060500eb", // the digest of the running image
  "start_time": "2021-03-22T15:12:24.580054", // start time of the compliance run
  "end_time": "2021-03-22T16:02:24.580054", // end time of the compliance run
  "result_file": "path_to_file", // path to the raw result file to attach
  "report_file": "path_to_file" // path to the report file to attach
}

Two fields are required to create a runtime compliance check. The check_type field references the type of scan that generates the report; the only supported option is oscap, which stands for OpenSCAP. The other required field is image_digest, which identifies the image used by the container that the runtime compliance check was run against.

While not required, the result attribute designates whether the given compliance check passed or failed. Several additional metadata fields are provided to further contextualize the runtime check, such as the pod and namespace that the check was run against.

Another key capability of this API is the ability to attach a report_file and a result_file to the created runtime compliance check. These can be the direct output generated by the runtime tool itself, such as an OpenSCAP XML document. Entire reports can therefore be stored within Anchore’s object storage, which provides a number of options for how and where this data is preserved.
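As an illustration, the following is a hedged sketch of creating a compliance check and attaching its files as a multipart form with curl. The endpoint path, credentials, and part names are placeholders; consult the Enterprise API swagger specification for the exact route and field names.

curl -u <user>:<password> -X POST \
  -F 'compliance_check={"check_type": "oscap", "result": "pass", "image_digest": "sha256:<digest>"};type=application/json' \
  -F "result_file=@/tmp/anchore/result.xml" \
  -F "report_file=@/tmp/anchore/report.html" \
  "http://localhost:8228/v1/<runtime-compliance-endpoint>"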

Once created, runtime compliance checks can be retrieved using the GET endpoint specified in the Swagger spec. The corresponding result and report files can be retrieved by pulling the file_ids from a runtime compliance check and querying the endpoint for runtime compliance results using the specified result_id.

1.1 - REM

Remote Compliance Check

Anchore Enterprise Remote Execution Manager (REM) enables an operator to run a STIG compliance check for a defined container within a Kubernetes cluster. REM can perform package management tasks such as installing and removing OpenSCAP, retrieve the generated results files, and upload them to the compliance API. A local data store is also provided if upload functionality is disabled or unavailable.

Installation

REM releases are uploaded to a public AWS S3 bucket.

To install REM, you can use either the AWS CLI or cURL to retrieve both the binary and the default configuration for REM.

Retrieving the default configuration file is the same regardless of which operating system you’re using:

curl -o rem.yaml https://anchore-rem-releases.s3-us-west-2.amazonaws.com/v0.1.9/rem.yaml

macOS dmg

curl -o rem.dmg https://anchore-rem-releases.s3-us-west-2.amazonaws.com/v0.1.9/rem_0.1.9_darwin_amd64.dmg

macOS Tar

curl -o rem.tar.gz https://anchore-rem-releases.s3-us-west-2.amazonaws.com/v0.1.9/rem_0.1.9_darwin_amd64.tar.gz

Debian

curl -o rem.deb https://anchore-rem-releases.s3-us-west-2.amazonaws.com/v0.1.9/rem_0.1.9_linux_amd64.deb

RPM

curl -o rem.rpm https://anchore-rem-releases.s3-us-west-2.amazonaws.com/v0.1.9/rem_0.1.9_linux_amd64.rpm

Linux Tar

curl -o rem.tar.gz https://anchore-rem-releases.s3-us-west-2.amazonaws.com/v0.1.9/rem_0.1.9_linux_amd64.tar.gz

Windows

curl -o rem.zip https://anchore-rem-releases.s3-us-west-2.amazonaws.com/v0.1.9/rem_0.1.9_windows_amd64.zip

Usage

REM works out of the box with minimal required configuration.

At the very least, REM needs to be able to authenticate with the Kubernetes API, know which command to run, and know which pod and container to connect to. If you have a kubeconfig at ~/.kube/config, REM will use that by default.

To see how to configure REM with these minimal details, see the Pod Configuration section.

Shell Completion

REM supports completion for BASH, Zsh, and Fish shells. Run rem completion -h for more information.

Configuration

REM searches for its configuration file in several locations. The following examples are listed in order of precedence.

From the CLI you can pass a -f or --config flag with the path to the configuration file.

> rem -f /tmp/anchore/config.yaml

Setting an environment variable:

> export REM_CONFIGPATH="/tmp/anchore/config.yaml"

Current directory of execution:

./rem.yaml
.rem/config.yaml

User home directory path:

~/.rem.yaml

XDG configured directory path:

rem/config.yaml

It is recommended to use the configuration file attached to each release as an artifact. The example configuration file in the repository is a good reference for what each configuration key does.

Pod configuration

This section describes the minimum configuration required for REM to work.

In the file, you can specify Kubernetes pod information in the following section:

# This section tells REM the execution details for the STIG check report:
# Pod Name, Namespace, and Container Name are required so that REM knows where to exec the stig check
report:
  podName: "centos"
  nameSpace: "default"
  containerName: "centos"

  # These must be set via the file, and correspond to the command being executed in the container
  # For example, if your compliance check command looks like this:
  #   oscap xccdf eval --profile <profile> --results /tmp/anchore/result.xml --report /tmp/anchore/report.html target.xml
  # The values for --results and --report should match the values of these configuration settings.
  # The file paths defined here are also where REM downloads the files from the container. You can think of it like this:
  #    docker cp container:/tmp/anchore/report.html /tmp/anchore/report.html
  reportFile: "/tmp/anchore/report.html" 
  resultFile: "/tmp/anchore/result.xml"

# REM supports Kubernetes Configuration in the following manner:
#   1. If you have a Kubeconfig at ~/.kube/config, you don't need to set any of these fields below, REM will just use that
#   2. If you want to explicitly specify kubernetes configuration details, you can do so in each field below (ignore path)
#   3. If you are running REM within Kubernetes, set path to "use-in-cluster" and set cluster to the cluster name and you don't need to set any of the other fields
kubeconfig:
  path: "" # set to "use-in-cluster" if running REM within a kubernetes container
  cluster: ""
  clusterCert: # base64 encoded cluster cert
  server:  # ex. https://kubernetes.docker.internal:6443
  user:
    type:  # valid: [private_key, token]
    clientCert: # if type==private_key, base64 encoded client cert
    privateKey: # if type==private_key, base64 encoded private key
    token: # plaintext service account token

As an alternative, or to override the settings in the configuration file from the command line, you can pass a few flags to set new values.

Here, <cmd> is the full oscap command to execute within the container, and the arguments before the double hyphen '--' tell REM where to run the command:

$ rem kexec -n <namespace> -p <pod> -c <container> -k <kubeconfig-path-override> -- <cmd>

Example (this will use the kubeconfig at ~/.kube/config):

$ rem kexec -n default -p anchore-pod -c anchore-container -- oscap xccdf eval --profile standard --results /tmp/result.xml --report /tmp/report.html target.xml

Note: The double hyphen -- is important because it tells REM that all subsequent flags should be passed to the container command.

A full list of the options supported by the rem kexec command can be found by running the command with the -h or --help option, e.g.:

rem kexec --help

Compliance Tool Installation

Enable the following section in the configuration file.

command:
  .
  ..
    oscap:
      # This boolean flag tells REM whether or not to try to install OpenSCAP into the container (if the command is oscap)
      installEnabled: true
    
      # This boolean flag tells REM whether or not to try to uninstall OpenSCAP from the container
      # (after the oscap command runs and the result/report files get downloaded)
      uninstallEnabled: true

Once the installation option has been enabled, the operator can either install the compliance tool manually or let REM automatically install the missing tool needed to run the compliance check.

Note: uninstallEnabled can be set to false if you intend to leave the tool available in the container.

Running the following command will install OpenSCAP manually, although this step is not mandatory:

> rem kexec install oscap

Run a compliance check

There are two ways to run the check: from the command line, or by having REM read the command from the configuration file.

From the command line

> rem kexec oscap -- xccdf eval --profile xccdf_org.ssgproject.content_profile_stig --fetch-remote-resources --results /tmp/anchore/result.xml --report /tmp/anchore/report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

From configuration file

command:
  # If no command is specified through arguments passed to the application on the command line, this command will be used
  # Each element of the list is interpreted as part of the command
  # I.E. echo 'hello-world' > /tmp/test.txt would look like:
  #   cmd:
  #     - echo
  #     - 'hello-world' > /tmp/test.txt
  cmd: oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig --fetch-remote-resources --results /tmp/anchore/result.xml --report /tmp/anchore/report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

Once the check has completed, the report and result files will be located at the paths that were passed to OpenSCAP.

Custom STIG targets

REM allows the operator to specify a custom target by setting a path under customTargetPath.

# If a custom OSCAP profile is desired, specify its path here
# Note: this will be placed into a /tmp/anchore/ directory in the container at runtime, so the command being executed
# should reference the target from that directory
customTargetPath: <local path to target>/custom.xml
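For example, a check run against the custom target might look like the following sketch, assuming the file is named custom.xml and is therefore available inside the container at /tmp/anchore/custom.xml:

> rem kexec oscap -- xccdf eval --profile xccdf_org.ssgproject.content_profile_stig --results /tmp/anchore/result.xml --report /tmp/anchore/report.html /tmp/anchore/custom.xml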

Audit uploads

REM has an audit database that is used to track which compliance checks have been successfully run. It also provides fault tolerance in cases where reports could not be uploaded due to unavailable service connections to Enterprise: REM will mark those uploads as incomplete, allowing the operator to issue a flush command and push the remainder to Enterprise.

Database subcommand

To list the current state of all past transactions, issue the following command:

> rem db list

To retrieve detailed information about a transaction, use the db get command with the id:

> rem db get 1

To push all results which have been marked as not uploaded, issue the following command:

Note: the --dryrun flag will show you the records which will be processed.

> rem db upload
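For example, to preview which records would be pushed without uploading them, run the same command with the flag noted above:

> rem db upload --dryrun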

2 - Tech Preview - V2 Vulnerability Scanner

Tech Preview: V2 Vulnerability Scanner

A new vulnerability scanning engine, based on Grype, improves performance, reduces database load, and provides better vulnerability matching results. It includes a new vulnerability feed sync process, integrated into the Enterprise feed service, that also provides faster feed syncs from the feed service to the policy engine.

Tech Preview Status

Note: The v2 scanner is intended for use in sandbox or staging environments in the current release. It is not possible to run both vulnerability scanners at the same time. This configuration is picked up at bootstrap, and cannot be changed on a running system.

  1. The new mode must be set at deployment time; the scanner is configured at service startup.
  2. Switching modes in a deployment is not supported.
  3. Downgrading from the v2 scanner back to the legacy scanner is not supported.
  4. Some features of the policy system are not yet supported in this mode. The following vulnerability gate triggers are not supported for the new scanner and will return incorrect results when used:
    1. vulnerability_data_unavailable
    2. stale_feed_data
  5. Windows container scanning is not yet supported. Support will be added in the next feature release.
  6. Proprietary vulnerability feeds are not yet supported in this scanner. Support will be added in the next feature release.

Running with docker compose

  1. Install or update to Anchore Enterprise 3.1.0
  2. Add the following environment variable to the policy engine container section of the docker compose file:
    policy-engine:
      ...
      environment:
      ...
      - ANCHORE_VULNERABILITIES_PROVIDER=grype
  3. Redeploy the services.

Running with helm

  1. Install or update to Anchore Enterprise 3.1.0.
  2. Update the following value in your values.yaml configuration file. See the chart README for more details on configuring this file:
    anchorePolicyEngine:
      ...
      vulnerabilityProvider: grype
    
  3. Redeploy the services:
    helm upgrade

After making the relevant change above and redeploying, the system will start up with the v2 vulnerability scanner enabled and will sync the latest version of the Grype vulnerability DB as built by your local feed service. Note that legacy feeds will no longer be synced while the v2 scanner is configured. All vulnerability data and scanning will now come from the ‘grypedb’ feed.

Vulnerability Feed Data and Syncs

The v2 scanner has its own feed sync mechanism that generates a Grype vulnerability DB from your locally installed feed service instead of https://ancho.re as used by Grype itself. This results in a much faster sync process, since the DB is packaged as a single database file. It also reduces load on the Engine DB, since scanner matching and syncs no longer require large numbers of writes to the Engine DB. The Grype vulnerability DB is built from the same sources as the legacy service, so there is no reduction in scan coverage or vulnerability sources supported.

The feed synced by the Grype provider is identified by the feed name ‘grypedb’ when using the feed listing API or the anchore-cli system feeds list CLI command.
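For example, you can confirm that the new feed is present and syncing with the CLI command mentioned above (output columns may vary by version):

anchore-cli system feeds list | grep grypedb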