Deploying Anchore Enterprise

Anchore Enterprise and its components are delivered as Docker container images, which can be deployed co-located, fully distributed, or anywhere in between. Anchore Enterprise can run on a single host or be deployed in a scale-out pattern for increased analysis throughput.

To get up and running, jump to the guide that matches your deployment:

Enterprise Container Images

Enterprise Cloud Images

Anchore CTL

1 - Requirements

This section details the general requirements for running Anchore Enterprise. For a conceptual understanding of Anchore Enterprise, please see the Overview topic prior to deploying the software.

Runtime

Anchore Enterprise requires a Docker-compatible runtime (version 1.12 or higher). Deployment is supported on:

  • Docker Compose (for demo or proof-of-concept and small deployments)
  • Any Kubernetes Certified Service Provider (KCSP) as certified by the Cloud Native Computing Foundation (CNCF) via Helm.
  • Any Kubernetes Certified Distribution as certified by the Cloud Native Computing Foundation (CNCF) via Helm.
  • Amazon Elastic Container Service (ECS) via Helm.

Resourcing

Use case and usage patterns will determine the resource requirements for Anchore Enterprise. When deploying via Helm, requests and limits are set in the values.yaml file. When deploying via Docker Compose, add reservations and limits to your Compose file. The following recommendations can get you started:

  • Requests specify the desired resource amounts for the container, while limits specify the maximum resource amounts the container is allowed to consume. We have found that setting the request and limit to the same value provides the best quality of service (QoS) from Kubernetes. We do not recommend setting limits for CPU.

  • We do not recommend setting less than 1 CPU unit for any container. Anything less could result in unexpected behavior and should only be used in testing scenarios.

  • For the catalog, policy and postgresql service containers, we recommend a minimum of 2 CPU units.

  • We do not recommend setting memory units to less than 8G except for API and UI services, where we recommend starting at 4G. Less than these values could result in OOM errors or containers restarting unexpectedly.

If you intend to use Kubernetes, the default values.yaml found in the Anchore Enterprise Helm Chart provides some resourcing recommendations to get you started.
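For example, a values.yaml override following the recommendations above (memory request equal to the limit, a CPU request but no CPU limit) might look like the sketch below. The catalog key mirrors the chart's per-service layout, but treat the exact key names as assumptions and confirm them against your chart version:

catalog:
  resources:
    requests:
      cpu: 2
      memory: 8Gi
    limits:
      memory: 8Gi
# repeat for the other services (api, policyEngine, and so on), using
# the service names defined in your chart version's values.yaml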

Database

The only service dependency strictly required by Anchore Enterprise is a PostgreSQL database (13.0 or higher). All services connect to it, but they do not use it for communication beyond some very simple service registration/lookup processes; the database is centralized simply for ease of management and operation. Anchore Enterprise uses this database to provide persistent storage for image, policy, and analysis data. For more information, go to Anchore Enterprise Architecture.

A PostgreSQL database ships with the default deployment mechanisms for Anchore Enterprise. This is often referred to as the Anchore-managed database. This can be run in a container, as configured in the example Docker Compose file and default Helm values file.

The PostgreSQL database requirement can also be met by a service external to Anchore Enterprise. PostgreSQL-compatible databases, such as Amazon RDS for PostgreSQL, can be used for highly scalable cloud deployments.
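As an illustrative sketch, pointing a Helm deployment at an external PostgreSQL instance generally means disabling the chart-managed database and supplying connection details in your values file. The key names below are assumptions based on common chart layouts; consult the chart README for the exact parameters:

postgresql:
  chartEnabled: false                                # do not deploy the chart-managed database
  externalEndpoint: anchore-db.example.internal:5432 # hypothetical endpoint
  auth:
    username: anchore
    password: <your_db_password>
    database: anchore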

Network

An Anchore Enterprise deployment requires the following three categories of network access:

  • Service Access
    • Connectivity between Anchore Enterprise services, including access to an external database.
  • Registry Access
    • Network connectivity, including DNS resolution, to the registries from which Anchore Enterprise needs to download images.
  • Anchore Data Service Access
    • Anchore Enterprise requires access to the datasets in order to perform analysis and vulnerability matching. See Anchore Enterprise Data Feeds for more information.

Security

Anchore Enterprise is deployed as source repositories or container images that can be run manually using Docker Compose, Kubernetes, or any other supported container platform.

By default, Anchore Enterprise does not require any special permissions. It can be run as an unprivileged container with no access to the underlying Docker host.

Note: Anchore Enterprise can be configured to pull images through the Docker socket. However, this configuration is not recommended, as it grants the Anchore Enterprise container added privileges and may incur a performance impact on the Docker host.
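For reference, this discouraged configuration is typically a bind mount of the Docker socket into the container. A minimal Docker Compose sketch, assuming the quickstart's analyzer service name:

services:
  analyzer:
    volumes:
      # bind-mounting the Docker socket grants the container broad control
      # over the Docker host; shown for reference only, not recommended
      - /var/run/docker.sock:/var/run/docker.sock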

Storage

Anchore Enterprise can be configured to depend on other storage for various artifacts. For full details on storage configuration, see Storage Configuration.

  • Configuration volume: this volume is used to provide persistent storage to the container, from which it will read its configuration files and, optionally, certificates. Requirement: Less than 1MB.
  • [Optional] Scratch space: this temporary storage volume is recommended but not required. During the analysis of images, Anchore Enterprise downloads and extracts all of the layers required for an image. These layers are extracted and analyzed, after which the layers and extracted data are deleted. If temporary storage is not configured, the container’s ephemeral storage will be used to store temporary files. However, performance is likely to be improved by using a dedicated volume (see the sketch after this list).
  • [Optional] Layer cache: another temporary storage volume may also be used for image-layer caching to speed up analysis. This caches image layers for re-use by analyzers when generating an SBOM / analyzing an image.
  • [Optional] Object storage: Anchore Enterprise stores documents containing archives of image analysis data and policies as JSON documents. By default, these documents are stored within the PostgreSQL database. However, Anchore Enterprise can be configured to store archive documents in a filesystem (volume), S3 object store, or Swift object store. Requirement: Number of images x 10MB (estimated).
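A minimal sketch of a dedicated scratch volume in Docker Compose, assuming the quickstart's analyzer service name; the host path is a placeholder, and the in-container path must match the tmp_dir location in your Anchore configuration:

services:
  analyzer:
    volumes:
      # dedicated scratch space for layer download/extraction during analysis
      - /data/anchore-scratch:/analysis_scratch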

Enterprise UI

The Anchore Enterprise UI module interfaces with the Anchore API using the external API endpoint. The UI requires access to the Anchore database, where it creates its own namespace for persistent configuration storage. Additionally, a Redis database, deployed and managed by Anchore Enterprise through the supported deployment mechanisms, is used to store session information.

  • Network
    • Ingress
      • The Anchore UI module publishes a web UI service by default on port 3000; however, this port can be remapped (see the sketch after this list).
    • Egress
      • The Anchore UI module requires access to three network services at a minimum:
        • External API endpoint (typically port 8228)
        • Redis Database (typically port 6379)
        • PostgreSQL Database (typically port 5432)
  • Redis Service
    • Version 7 or higher
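As an example of the port remap mentioned above, a Docker Compose override only needs to change the host side of the port mapping (the service name ui matches the quickstart compose file; host port 4000 is arbitrary):

services:
  ui:
    ports:
      # publish the web UI on host port 4000 while the container still listens on 3000
      - "4000:3000"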

Optimizing your Deployment

Optimizing your Anchore deployment on Kubernetes involves various strategies to enhance performance, reliability, and scalability. Here are some key tips:

  • Ensure that your Analyzer, API, Catalog, and Policy service containers have adequate CPU and memory resources. Each service has reference recommendations which can be found in the Anchore Enterprise chart values.yaml.
  • Integrate with monitoring tools like Prometheus and Grafana to monitor key metrics such as CPU, memory usage, analysis times, and feed sync status. You can also set up alerts for critical thresholds. Follow our Monitoring guides for Prometheus and Grafana setup.
  • For large deployments, it is good practice to schedule regular vacuuming, indexing, and performance tuning to keep the database running efficiently (see the example after this list).
  • Layer caching in Docker can significantly speed up the image build process by reusing layers that haven’t changed, reducing build times and improving efficiency. Follow our Layer Caching setup guide.
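As a sketch of the database maintenance tip above, routine VACUUM and reindex passes can be run with psql; the connection details are placeholders, and scheduling (cron or a Kubernetes CronJob) is left to your environment:

# analyze and reclaim dead rows in the Anchore database (placeholder host/user/db)
psql -h anchore-db.example.internal -U anchore -d anchore -c 'VACUUM (VERBOSE, ANALYZE);'

# rebuild indexes without blocking reads/writes (PostgreSQL 12+)
psql -h anchore-db.example.internal -U anchore -d anchore -c 'REINDEX DATABASE CONCURRENTLY anchore;'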

Next Steps

If you feel you have a solid grasp of the requirements for deploying Anchore Enterprise, we recommend following one of our installation guides.

2 - Deploy using Docker Compose

In this topic, you’ll learn how to use Docker Compose to get up and running with a stand-alone Anchore Enterprise deployment.

Before moving further with Anchore Enterprise, it is highly recommended to read the Overview sections to gain a deeper understanding of fundamentals, concepts, and proper usage.

Before You Start

The following instructions assume you are using a system running Docker v1.12 or higher, and a version of Docker Compose that supports at least v2 of the docker compose configuration format.

  • A stand-alone deployment requires at least 16GB of RAM, and enough disk space available to support the largest container images or source repositories that you intend to analyze. A good rule of thumb is three times the size of the largest source repository or container image. For small testing, like basic Linux distro images or database images, between 20GB and 40GB of disk space should be sufficient.
  • To access Anchore Enterprise, you need a valid license.yaml file that has been issued to you by Anchore. If you do not have a license yet, visit the Anchore Contact page to request one.
  • You need root or sudo access to the system where you will be running Docker and deploying Anchore Enterprise; all commands in this document are run as root.

Getting Started

Follow the steps below to get up and running!

Step 1: Check access to images

You’ll need authenticated access to the anchore/enterprise and anchore/enterprise-ui repositories on DockerHub. Anchore Customer Success will provide a DockerHub PAT (Personal Access Token) for access to the images. Log in with your Docker PAT to pull images from Docker Hub:

# docker login -u <your_dockerhub_pat_user> -p <your_dockerhub_pat>

Step 2: Configure & run

Download the Docker Compose File into a working directory where you have also placed the license.yaml file you got from Anchore.

# curl https://docs.anchore.com/current/docs/deployment/docker_compose/docker-compose.yaml > docker-compose.yaml
# cp <path/to/your/license.yaml> ./license.yaml

Edit the compose file to set all instances of ANCHORE_ADMIN_PASSWORD to a strong password of your choice.

- ANCHORE_ADMIN_PASSWORD=yourstrongpassword
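If you prefer to make the change from the shell, you can locate and replace every occurrence in one pass. The placeholder value foobar is an assumption based on the default admin password used elsewhere in these docs; check what your downloaded file actually contains before substituting (GNU sed shown; on macOS use sed -i ''):

# grep -n ANCHORE_ADMIN_PASSWORD docker-compose.yaml
# sed -i 's/ANCHORE_ADMIN_PASSWORD=foobar/ANCHORE_ADMIN_PASSWORD=yourstrongpassword/g' docker-compose.yaml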

Then start your environment from your working directory:

# docker compose up -d

Step 3: Install AnchoreCTL

Next, we’ll install the lightweight Anchore Enterprise client tool, quickly test using the version operation, and set up a few environment variables to allow it to interact with your deployment using the admin password you defined in the previous step.

# curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b /usr/local/bin v5.16.0

# ./anchorectl version
Application:        anchorectl
Version:            5.16.0
SyftVersion:        v0.97.1
BuildDate:          2023-11-21T22:09:54Z
GitCommit:          f7604438b45f7161c11145999897d4ae3efcb0c8
GitDescription:     v5.16.0
Platform:           linux/amd64
GoVersion:          go1.21.1
Compiler:           gc

# export ANCHORECTL_URL="http://localhost:8228"
# export ANCHORECTL_USERNAME="admin"
# export ANCHORECTL_PASSWORD="yourstrongpassword"

Step 4: Verify service availability

After a few minutes (depending on system speed) Anchore Enterprise and Anchore UI services should be up and running, ready to use. You can verify the containers are running with docker compose, as shown in the following example.

# docker compose ps
             Name                           Command                  State               Ports         
-------------------------------------------------------------------------------------------------------
anchorequickstart_analyzer_1          /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchorequickstart_anchore-db_1        docker-entrypoint.sh postgres    Up             5432/tcp              
anchorequickstart_api_1               /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8228->8228/tcp
anchorequickstart_catalog_1           /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp           
anchorequickstart_data-syncer_1       /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8778->8228/tcp  
anchorequickstart_notifications_1     /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8668->8228/tcp
anchorequickstart_policy-engine_1     /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchorequickstart_queue_1             /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp    
anchorequickstart_reports_1           /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8558->8228/tcp
anchorequickstart_reports_worker_1    /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:55427->8228/tcp
anchorequickstart_ui-redis_1          docker-entrypoint.sh redis ...   Up             6379/tcp              
anchorequickstart_ui_1                /docker-entrypoint.sh node ...   Up             0.0.0.0:3000->3000/tcp

You can then run a command to get the status of the Anchore Enterprise services:


# ./anchorectl system status
 ✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE         │ HOST ID            │ URL                         │ UP   │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer        │ anchore-quickstart │ http://analyzer:8228        │ true │ available      │ 5160       │ 5.16.0       │
│ policy_engine   │ anchore-quickstart │ http://policy-engine:8228   │ true │ available      │ 5160       │ 5.16.0       │
│ apiext          │ anchore-quickstart │ http://api:8228             │ true │ available      │ 5160       │ 5.16.0       │
│ reports         │ anchore-quickstart │ http://reports:8228         │ true │ available      │ 5160       │ 5.16.0       │
│ reports_worker  │ anchore-quickstart │ http://reports-worker:8228  │ true │ available      │ 5160       │ 5.16.0       │
│ data_syncer     │ anchore-quickstart │ http://data-syncer:8228     │ true │ available      │ 5160       │ 5.16.0       │
│ simplequeue     │ anchore-quickstart │ http://queue:8228           │ true │ available      │ 5160       │ 5.16.0       │
│ notifications   │ anchore-quickstart │ http://notifications:8228   │ true │ available      │ 5160       │ 5.16.0       │
│ catalog         │ anchore-quickstart │ http://catalog:8228         │ true │ available      │ 5160       │ 5.16.0       │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘

Note: The first time you run Anchore Enterprise, vulnerability data will begin syncing to the system within a few minutes. For the best experience, wait until the core vulnerability data feeds have completed before proceeding.
You can check the status of your feed sync using AnchoreCTL:

# ./anchorectl feed list    
 ✔ List feed                                                                                                                                                                                                                                                            
┌────────────────────────────────────────────┬────────────────────┬─────────┬──────────────────────┬──────────────┐
│ FEED                                       │ GROUP              │ ENABLED │ LAST UPDATED         │ RECORD COUNT │
├────────────────────────────────────────────┼────────────────────┼─────────┼──────────────────────┼──────────────┤
│ ClamAV Malware Database                    │ clamav_db          │ true    │ 2024-09-30T18:06:05Z │ 1            │
│ Vulnerabilities                            │ github:composer    │ true    │ 2024-09-30T18:12:03Z │ 4040         │
│ Vulnerabilities                            │ github:dart        │ true    │ 2024-09-30T18:12:03Z │ 8            │
│ Vulnerabilities                            │ github:gem         │ true    │ 2024-09-30T18:12:03Z │ 817          │
│ Vulnerabilities                            │ github:go          │ true    │ 2024-09-30T18:12:03Z │ 1879         │
│ Vulnerabilities                            │ github:java        │ true    │ 2024-09-30T18:12:03Z │ 5060         │
│ Vulnerabilities                            │ github:npm         │ true    │ 2024-09-30T18:12:03Z │ 15619        │
│ Vulnerabilities                            │ github:nuget       │ true    │ 2024-09-30T18:12:03Z │ 624          │
│ Vulnerabilities                            │ github:python      │ true    │ 2024-09-30T18:12:03Z │ 3229         │
│ Vulnerabilities                            │ github:rust        │ true    │ 2024-09-30T18:12:03Z │ 804          │
│ Vulnerabilities                            │ github:swift       │ true    │ 2024-09-30T18:12:03Z │ 32           │
│ Vulnerabilities                            │ msrc:10378         │ true    │ 2024-09-30T18:12:16Z │ 2668         │
│ Vulnerabilities                            │ msrc:10379         │ true    │ 2024-09-30T18:12:16Z │ 2645         │
│ Vulnerabilities                            │ msrc:10481         │ true    │ 2024-09-30T18:12:16Z │ 1951         │
│ Vulnerabilities                            │ msrc:10482         │ true    │ 2024-09-30T18:12:16Z │ 2028         │
│ Vulnerabilities                            │ msrc:10483         │ true    │ 2024-09-30T18:12:16Z │ 2822         │
│ Vulnerabilities                            │ msrc:10484         │ true    │ 2024-09-30T18:12:16Z │ 1934         │
│ Vulnerabilities                            │ msrc:10543         │ true    │ 2024-09-30T18:12:16Z │ 2796         │
│ Vulnerabilities                            │ msrc:10729         │ true    │ 2024-09-30T18:12:16Z │ 2908         │
│ Vulnerabilities                            │ msrc:10735         │ true    │ 2024-09-30T18:12:16Z │ 3006         │
│ Vulnerabilities                            │ msrc:10788         │ true    │ 2024-09-30T18:12:16Z │ 466          │
│ Vulnerabilities                            │ msrc:10789         │ true    │ 2024-09-30T18:12:16Z │ 437          │
│ Vulnerabilities                            │ msrc:10816         │ true    │ 2024-09-30T18:12:16Z │ 3328         │
│ Vulnerabilities                            │ msrc:10852         │ true    │ 2024-09-30T18:12:16Z │ 3043         │
│ Vulnerabilities                            │ msrc:10853         │ true    │ 2024-09-30T18:12:16Z │ 3167         │
│ Vulnerabilities                            │ msrc:10855         │ true    │ 2024-09-30T18:12:16Z │ 3300         │
│ Vulnerabilities                            │ msrc:10951         │ true    │ 2024-09-30T18:12:16Z │ 716          │
│ Vulnerabilities                            │ msrc:10952         │ true    │ 2024-09-30T18:12:16Z │ 766          │
│ Vulnerabilities                            │ msrc:11453         │ true    │ 2024-09-30T18:12:16Z │ 1240         │
│ Vulnerabilities                            │ msrc:11454         │ true    │ 2024-09-30T18:12:16Z │ 1290         │
│ Vulnerabilities                            │ msrc:11466         │ true    │ 2024-09-30T18:12:16Z │ 395          │
│ Vulnerabilities                            │ msrc:11497         │ true    │ 2024-09-30T18:12:16Z │ 1454         │
│ Vulnerabilities                            │ msrc:11498         │ true    │ 2024-09-30T18:12:16Z │ 1514         │
│ Vulnerabilities                            │ msrc:11499         │ true    │ 2024-09-30T18:12:16Z │ 981          │
│ Vulnerabilities                            │ msrc:11563         │ true    │ 2024-09-30T18:12:16Z │ 1344         │
│ Vulnerabilities                            │ msrc:11568         │ true    │ 2024-09-30T18:12:16Z │ 2993         │
│ Vulnerabilities                            │ msrc:11569         │ true    │ 2024-09-30T18:12:16Z │ 3095         │
│ Vulnerabilities                            │ msrc:11570         │ true    │ 2024-09-30T18:12:16Z │ 2900         │
│ Vulnerabilities                            │ msrc:11571         │ true    │ 2024-09-30T18:12:16Z │ 3266         │
│ Vulnerabilities                            │ msrc:11572         │ true    │ 2024-09-30T18:12:16Z │ 3238         │
│ Vulnerabilities                            │ msrc:11583         │ true    │ 2024-09-30T18:12:16Z │ 1038         │
│ Vulnerabilities                            │ msrc:11644         │ true    │ 2024-09-30T18:12:16Z │ 1054         │
│ Vulnerabilities                            │ msrc:11645         │ true    │ 2024-09-30T18:12:16Z │ 1089         │
│ Vulnerabilities                            │ msrc:11646         │ true    │ 2024-09-30T18:12:16Z │ 1055         │
│ Vulnerabilities                            │ msrc:11647         │ true    │ 2024-09-30T18:12:16Z │ 1074         │
│ Vulnerabilities                            │ msrc:11712         │ true    │ 2024-09-30T18:12:16Z │ 1442         │
│ Vulnerabilities                            │ msrc:11713         │ true    │ 2024-09-30T18:12:16Z │ 1491         │
│ Vulnerabilities                            │ msrc:11714         │ true    │ 2024-09-30T18:12:16Z │ 1447         │
│ Vulnerabilities                            │ msrc:11715         │ true    │ 2024-09-30T18:12:16Z │ 999          │
│ Vulnerabilities                            │ msrc:11766         │ true    │ 2024-09-30T18:12:16Z │ 912          │
│ Vulnerabilities                            │ msrc:11767         │ true    │ 2024-09-30T18:12:16Z │ 915          │
│ Vulnerabilities                            │ msrc:11768         │ true    │ 2024-09-30T18:12:16Z │ 940          │
│ Vulnerabilities                            │ msrc:11769         │ true    │ 2024-09-30T18:12:16Z │ 934          │
│ Vulnerabilities                            │ msrc:11800         │ true    │ 2024-09-30T18:12:16Z │ 382          │
│ Vulnerabilities                            │ msrc:11801         │ true    │ 2024-09-30T18:12:16Z │ 1277         │
│ Vulnerabilities                            │ msrc:11802         │ true    │ 2024-09-30T18:12:16Z │ 1277         │
│ Vulnerabilities                            │ msrc:11803         │ true    │ 2024-09-30T18:12:16Z │ 981          │
│ Vulnerabilities                            │ msrc:11896         │ true    │ 2024-09-30T18:12:16Z │ 792          │
│ Vulnerabilities                            │ msrc:11897         │ true    │ 2024-09-30T18:12:16Z │ 762          │
│ Vulnerabilities                            │ msrc:11898         │ true    │ 2024-09-30T18:12:16Z │ 763          │
│ Vulnerabilities                            │ msrc:11923         │ true    │ 2024-09-30T18:12:16Z │ 1733         │
│ Vulnerabilities                            │ msrc:11924         │ true    │ 2024-09-30T18:12:16Z │ 1726         │
│ Vulnerabilities                            │ msrc:11926         │ true    │ 2024-09-30T18:12:16Z │ 1536         │
│ Vulnerabilities                            │ msrc:11927         │ true    │ 2024-09-30T18:12:16Z │ 1503         │
│ Vulnerabilities                            │ msrc:11929         │ true    │ 2024-09-30T18:12:16Z │ 1433         │
│ Vulnerabilities                            │ msrc:11930         │ true    │ 2024-09-30T18:12:16Z │ 1429         │
│ Vulnerabilities                            │ msrc:11931         │ true    │ 2024-09-30T18:12:16Z │ 1474         │
│ Vulnerabilities                            │ msrc:12085         │ true    │ 2024-09-30T18:12:16Z │ 1044         │
│ Vulnerabilities                            │ msrc:12086         │ true    │ 2024-09-30T18:12:16Z │ 1053         │
│ Vulnerabilities                            │ msrc:12097         │ true    │ 2024-09-30T18:12:16Z │ 964          │
│ Vulnerabilities                            │ msrc:12098         │ true    │ 2024-09-30T18:12:16Z │ 939          │
│ Vulnerabilities                            │ msrc:12099         │ true    │ 2024-09-30T18:12:16Z │ 943          │
│ Vulnerabilities                            │ nvd                │ true    │ 2024-09-30T18:12:10Z │ 264156       │
│ Vulnerabilities                            │ alpine:3.10        │ true    │ 2024-09-30T18:11:54Z │ 2321         │
│ Vulnerabilities                            │ alpine:3.11        │ true    │ 2024-09-30T18:11:54Z │ 2659         │
│ Vulnerabilities                            │ alpine:3.12        │ true    │ 2024-09-30T18:11:54Z │ 3193         │
│ Vulnerabilities                            │ alpine:3.13        │ true    │ 2024-09-30T18:11:54Z │ 3684         │
│ Vulnerabilities                            │ alpine:3.14        │ true    │ 2024-09-30T18:11:54Z │ 4265         │
│ Vulnerabilities                            │ alpine:3.15        │ true    │ 2024-09-30T18:11:54Z │ 4815         │
│ Vulnerabilities                            │ alpine:3.16        │ true    │ 2024-09-30T18:11:54Z │ 5271         │
│ Vulnerabilities                            │ alpine:3.17        │ true    │ 2024-09-30T18:11:54Z │ 5630         │
│ Vulnerabilities                            │ alpine:3.18        │ true    │ 2024-09-30T18:11:54Z │ 6144         │
│ Vulnerabilities                            │ alpine:3.19        │ true    │ 2024-09-30T18:11:54Z │ 6348         │
│ Vulnerabilities                            │ alpine:3.2         │ true    │ 2024-09-30T18:11:54Z │ 305          │
│ Vulnerabilities                            │ alpine:3.20        │ true    │ 2024-09-30T18:11:54Z │ 6444         │
│ Vulnerabilities                            │ alpine:3.3         │ true    │ 2024-09-30T18:11:54Z │ 470          │
│ Vulnerabilities                            │ alpine:3.4         │ true    │ 2024-09-30T18:11:54Z │ 679          │
│ Vulnerabilities                            │ alpine:3.5         │ true    │ 2024-09-30T18:11:54Z │ 902          │
│ Vulnerabilities                            │ alpine:3.6         │ true    │ 2024-09-30T18:11:54Z │ 1075         │
│ Vulnerabilities                            │ alpine:3.7         │ true    │ 2024-09-30T18:11:54Z │ 1461         │
│ Vulnerabilities                            │ alpine:3.8         │ true    │ 2024-09-30T18:11:54Z │ 1671         │
│ Vulnerabilities                            │ alpine:3.9         │ true    │ 2024-09-30T18:11:54Z │ 1955         │
│ Vulnerabilities                            │ alpine:edge        │ true    │ 2024-09-30T18:11:54Z │ 6467         │
│ Vulnerabilities                            │ amzn:2             │ true    │ 2024-09-30T18:11:48Z │ 2280         │
│ Vulnerabilities                            │ amzn:2022          │ true    │ 2024-09-30T18:11:48Z │ 276          │
│ Vulnerabilities                            │ amzn:2023          │ true    │ 2024-09-30T18:11:48Z │ 736          │
│ Vulnerabilities                            │ chainguard:rolling │ true    │ 2024-09-30T18:12:02Z │ 4487         │
│ Vulnerabilities                            │ debian:10          │ true    │ 2024-09-30T18:11:55Z │ 32021        │
│ Vulnerabilities                            │ debian:11          │ true    │ 2024-09-30T18:11:55Z │ 33574        │
│ Vulnerabilities                            │ debian:12          │ true    │ 2024-09-30T18:11:55Z │ 32529        │
│ Vulnerabilities                            │ debian:13          │ true    │ 2024-09-30T18:11:55Z │ 31702        │
│ Vulnerabilities                            │ debian:7           │ true    │ 2024-09-30T18:11:55Z │ 20455        │
│ Vulnerabilities                            │ debian:8           │ true    │ 2024-09-30T18:11:55Z │ 24058        │
│ Vulnerabilities                            │ debian:9           │ true    │ 2024-09-30T18:11:55Z │ 28240        │
│ Vulnerabilities                            │ debian:unstable    │ true    │ 2024-09-30T18:11:55Z │ 35992        │
│ Vulnerabilities                            │ mariner:1.0        │ true    │ 2024-09-30T18:12:11Z │ 2092         │
│ Vulnerabilities                            │ mariner:2.0        │ true    │ 2024-09-30T18:12:11Z │ 2627         │
│ Vulnerabilities                            │ ol:5               │ true    │ 2024-09-30T18:12:01Z │ 1255         │
│ Vulnerabilities                            │ ol:6               │ true    │ 2024-09-30T18:12:01Z │ 1709         │
│ Vulnerabilities                            │ ol:7               │ true    │ 2024-09-30T18:12:01Z │ 2199         │
│ Vulnerabilities                            │ ol:8               │ true    │ 2024-09-30T18:12:01Z │ 1910         │
│ Vulnerabilities                            │ ol:9               │ true    │ 2024-09-30T18:12:01Z │ 874          │
│ Vulnerabilities                            │ rhel:5             │ true    │ 2024-09-30T18:12:06Z │ 7193         │
│ Vulnerabilities                            │ rhel:6             │ true    │ 2024-09-30T18:12:06Z │ 11129        │
│ Vulnerabilities                            │ rhel:7             │ true    │ 2024-09-30T18:12:06Z │ 11376        │
│ Vulnerabilities                            │ rhel:8             │ true    │ 2024-09-30T18:12:06Z │ 7007         │
│ Vulnerabilities                            │ rhel:9             │ true    │ 2024-09-30T18:12:06Z │ 4040         │
│ Vulnerabilities                            │ sles:11            │ true    │ 2024-09-30T18:12:19Z │ 594          │
│ Vulnerabilities                            │ sles:11.1          │ true    │ 2024-09-30T18:12:19Z │ 6125         │
│ Vulnerabilities                            │ sles:11.2          │ true    │ 2024-09-30T18:12:19Z │ 3291         │
│ Vulnerabilities                            │ sles:11.3          │ true    │ 2024-09-30T18:12:19Z │ 7081         │
│ Vulnerabilities                            │ sles:11.4          │ true    │ 2024-09-30T18:12:19Z │ 6583         │
│ Vulnerabilities                            │ sles:12            │ true    │ 2024-09-30T18:12:19Z │ 6018         │
│ Vulnerabilities                            │ sles:12.1          │ true    │ 2024-09-30T18:12:19Z │ 6205         │
│ Vulnerabilities                            │ sles:12.2          │ true    │ 2024-09-30T18:12:19Z │ 8339         │
│ Vulnerabilities                            │ sles:12.3          │ true    │ 2024-09-30T18:12:19Z │ 10396        │
│ Vulnerabilities                            │ sles:12.4          │ true    │ 2024-09-30T18:12:19Z │ 10215        │
│ Vulnerabilities                            │ sles:12.5          │ true    │ 2024-09-30T18:12:19Z │ 12444        │
│ Vulnerabilities                            │ sles:15            │ true    │ 2024-09-30T18:12:19Z │ 8737         │
│ Vulnerabilities                            │ sles:15.1          │ true    │ 2024-09-30T18:12:19Z │ 9245         │
│ Vulnerabilities                            │ sles:15.2          │ true    │ 2024-09-30T18:12:19Z │ 9573         │
│ Vulnerabilities                            │ sles:15.3          │ true    │ 2024-09-30T18:12:19Z │ 10074        │
│ Vulnerabilities                            │ sles:15.4          │ true    │ 2024-09-30T18:12:19Z │ 10438        │
│ Vulnerabilities                            │ sles:15.5          │ true    │ 2024-09-30T18:12:19Z │ 10882        │
│ Vulnerabilities                            │ sles:15.6          │ true    │ 2024-09-30T18:12:19Z │ 3778         │
│ Vulnerabilities                            │ ubuntu:12.04       │ true    │ 2024-09-30T18:12:37Z │ 14934        │
│ Vulnerabilities                            │ ubuntu:12.10       │ true    │ 2024-09-30T18:12:37Z │ 5641         │
│ Vulnerabilities                            │ ubuntu:13.04       │ true    │ 2024-09-30T18:12:37Z │ 4117         │
│ Vulnerabilities                            │ ubuntu:14.04       │ true    │ 2024-09-30T18:12:37Z │ 37919        │
│ Vulnerabilities                            │ ubuntu:14.10       │ true    │ 2024-09-30T18:12:37Z │ 4437         │
│ Vulnerabilities                            │ ubuntu:15.04       │ true    │ 2024-09-30T18:12:37Z │ 6220         │
│ Vulnerabilities                            │ ubuntu:15.10       │ true    │ 2024-09-30T18:12:37Z │ 6489         │
│ Vulnerabilities                            │ ubuntu:16.04       │ true    │ 2024-09-30T18:12:37Z │ 35057        │
│ Vulnerabilities                            │ ubuntu:16.10       │ true    │ 2024-09-30T18:12:37Z │ 8607         │
│ Vulnerabilities                            │ ubuntu:17.04       │ true    │ 2024-09-30T18:12:37Z │ 9095         │
│ Vulnerabilities                            │ ubuntu:17.10       │ true    │ 2024-09-30T18:12:37Z │ 7908         │
│ Vulnerabilities                            │ ubuntu:18.04       │ true    │ 2024-09-30T18:12:37Z │ 29591        │
│ Vulnerabilities                            │ ubuntu:18.10       │ true    │ 2024-09-30T18:12:37Z │ 8460         │
│ Vulnerabilities                            │ ubuntu:19.04       │ true    │ 2024-09-30T18:12:37Z │ 8742         │
│ Vulnerabilities                            │ ubuntu:19.10       │ true    │ 2024-09-30T18:12:37Z │ 8496         │
│ Vulnerabilities                            │ ubuntu:20.04       │ true    │ 2024-09-30T18:12:37Z │ 25673        │
│ Vulnerabilities                            │ ubuntu:20.10       │ true    │ 2024-09-30T18:12:37Z │ 10112        │
│ Vulnerabilities                            │ ubuntu:21.04       │ true    │ 2024-09-30T18:12:37Z │ 11365        │
│ Vulnerabilities                            │ ubuntu:21.10       │ true    │ 2024-09-30T18:12:37Z │ 12635        │
│ Vulnerabilities                            │ ubuntu:22.04       │ true    │ 2024-09-30T18:12:37Z │ 24135        │
│ Vulnerabilities                            │ ubuntu:22.10       │ true    │ 2024-09-30T18:12:37Z │ 14483        │
│ Vulnerabilities                            │ ubuntu:23.04       │ true    │ 2024-09-30T18:12:37Z │ 15562        │
│ Vulnerabilities                            │ ubuntu:23.10       │ true    │ 2024-09-30T18:12:37Z │ 18433        │
│ Vulnerabilities                            │ ubuntu:24.04       │ true    │ 2024-09-30T18:12:37Z │ 20148        │
│ Vulnerabilities                            │ wolfi:rolling      │ true    │ 2024-09-30T18:11:56Z │ 2906         │
│ Vulnerabilities                            │ anchore:exclusions │ true    │ 2024-09-30T18:11:56Z │ 12851        │
│ CISA KEV (Known Exploited Vulnerabilities) │ kev_db             │ true    │ 2024-09-30T18:07:21Z │ 1185         │
│ Exploit Prediction Scoring System Database │ epss_db            │ true    │ 2024-11-18T18:04:12Z │ 266565       │
└────────────────────────────────────────────┴────────────────────┴─────────┴──────────────────────┴──────────────┘

As soon as you see RECORD COUNT values set for all vulnerability groups, the system is fully populated and ready to present vulnerability results. Note that data syncs are incremental, so the next time you start up Anchore Enterprise it will be ready immediately. AnchoreCTL includes a useful utility that will block until the feeds have completed a successful sync:


# ./anchorectl system wait
 ✔ API available                                                                                        system
 ✔ Services available                        [10 up]                                                    system
 ✔ Vulnerabilities feed ready                                                                           system

Step 5: Start using Anchore

To get started, you can add a few images to Anchore Enterprise using AnchoreCTL. Once complete, you can also run an additional AnchoreCTL command to monitor the analysis state of the added images, waiting until the images move into an ‘analyzed’ state.

# ./anchorectl image add docker.io/library/alpine:latest
 ✔ Added Image                                                                                                              docker.io/library/alpine:latest
Image:
  status:           not-analyzed (active)
  tag:              docker.io/library/alpine:latest
  digest:           sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870
  id:               9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5

# ./anchorectl image add docker.io/library/nginx:latest
 ✔ Added Image                                                                                                              docker.io/library/nginx:latest
Image:
  status:           not-analyzed (active)
  tag:              docker.io/library/nginx:latest
  digest:           sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
  id:               2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
  distro:           debian@11 (amd64)
  layers:           6

# ./anchorectl image list
 ✔ Fetched images
┌───────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────────┬────────┐
│ TAG                                                   │ DIGEST                                                                  │ ANALYSIS     │ STATUS │
├───────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────────┼────────┤
│ docker.io/library/alpine:latest                       │ sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870 │ analyzed     │ active │
│ docker.io/library/nginx:latest                        │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ not_analyzed │ active │
└───────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────────┴────────┘

# ./anchorectl image add docker.io/library/nginx:latest --force --wait
 ⠏ Adding Image                                                                                                              docker.io/library/nginx:latest
 ⠼ Analyzing Image                           [analyzing]                                                                     docker.io/library/nginx:latest
...
...
 ✔ Analyzed Image                                                                                                            docker.io/library/nginx:latest
Image:
  status:           analyzed (active)
  tags:             docker.io/library/nginx:latest
  digest:           sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
  id:               2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
  distro:           debian@11 (amd64)
  layers:           6

# ./anchorectl image list
 ✔ Fetched images
┌───────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────┬────────┐
│ TAG                                                   │ DIGEST                                                                  │ ANALYSIS │ STATUS │
├───────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────┼────────┤
│ docker.io/library/alpine:latest                       │ sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870 │ analyzed │ active │
│ docker.io/library/nginx:latest                        │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ analyzed │ active │
└───────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────┴────────┘

Now that some images are in place, you can point your browser at the Anchore Enterprise UI by directing it to http://localhost:3000/.

Enter the username admin and the password you defined in the compose file to log in. These are some of the features you can use in the browser:

  • Navigate images
  • Inspect image contents
  • Perform security scans
  • Review compliance policy evaluations
  • Edit compliance policies with a complete policy editor UI
  • Manage accounts, users, and RBAC assignments
  • Review system events

Next Steps

Now that you have Anchore Enterprise running, you can begin to learn more about Anchore capabilities, architecture, concepts, and more.

Optional: Enabling Prometheus Monitoring

  1. Uncomment the following section at the bottom of the docker-compose.yaml file:

    #  # Uncomment this section to add a prometheus instance to gather metrics. This is mostly for quickstart to demonstrate prometheus metrics exported
    #  prometheus:
    #    image: docker.io/prom/prometheus:latest
    #    depends_on:
    #      - api
    #    volumes:
    #      - ./anchore-prometheus.yml:/etc/prometheus/prometheus.yml:z
    #    logging:
    #      driver: "json-file"
    #      options:
    #        max-size: 100m
    #    ports:
    #      - "9090:9090"
    #
    
  2. For each service entry in the docker-compose.yaml, change the following setting to enable metrics in the API for each service (a one-liner for this is shown after these steps):

    ANCHORE_ENABLE_METRICS=false
    

    to

    ANCHORE_ENABLE_METRICS=true
    
  3. Download the example prometheus configuration into the same directory as the docker-compose.yaml file, with name anchore-prometheus.yml:

    curl https://docs.anchore.com/current/docs/deployment/anchore-prometheus.yml > anchore-prometheus.yml
    docker compose up -d
    

    Result: You should see a new container started, and you can access Prometheus in your browser at http://localhost:9090.
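To flip every occurrence of the metrics setting from step 2 in a single pass, a substitution such as the following can be used (GNU sed shown; on macOS use sed -i ''), after which the services can be restarted:

sed -i 's/ANCHORE_ENABLE_METRICS=false/ANCHORE_ENABLE_METRICS=true/g' docker-compose.yaml
docker compose up -d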

Optional: Enabling Swagger UI

  1. Uncomment the following section at the bottom of the docker-compose.yaml file:

    #  # Uncomment this section to run a swagger UI service, for inspecting and interacting with the system API via a browser (http://localhost:8080 by default, change if needed in both sections below)
    #  swagger-ui-nginx:
    #    image: docker.io/nginx:latest
    #    depends_on:
    #      - api
    #      - swagger-ui
    #    ports:
    #      - "8080:8080"
    #    volumes:
    #      - ./anchore-swaggerui-nginx.conf:/etc/nginx/nginx.conf:z
    #    logging:
    #      driver: "json-file"
    #      options:
    #        max-size: 100m
    #  swagger-ui:
    #    image: docker.io/swaggerapi/swagger-ui
    #    environment:
    #      - URL=http://localhost:8080/v2/openapi.json
    #    logging:
    #      driver: "json-file"
    #      options:
    #        max-size: 100m
    
  2. Download the nginx configuration into the same directory as the docker-compose.yaml file, with name anchore-swaggerui-nginx.conf:

    curl https://docs.anchore.com/current/docs/deployment/anchore-swaggerui-nginx.conf > anchore-swaggerui-nginx.conf
    docker compose up -d
    

    Result: You should see a new container started, and you can access the Swagger UI in your browser at http://localhost:8080.

3 - Deploy on Kubernetes using Helm

The supported method for deploying Anchore Enterprise on Kubernetes is with Helm. The Anchore Enterprise Helm Chart includes configuration options for a full Enterprise deployment.

About the Helm Chart

Important release notes can be found in the README in the chart repository.

The chart is split into global and service-specific configurations for the core features, as well as global and service-specific configurations for the optional Enterprise services.

  • The anchoreConfig section of the values file contains the application configuration for Anchore Enterprise. This includes the database connection information, credentials, and other application settings.
  • Anchore services run as Kubernetes deployments when installed with the Helm chart. Each service has its own section in the values file for making customizations and configuring the Kubernetes deployment spec.

For a description of each service component, see Anchore Enterprise Service Overview.

Note If you are moving from the Anchore Engine Helm chart deployment to the updated Anchore Enterprise Helm chart, see here for further guidance.

Prerequisites

See the README in the chart repository for prerequisites before starting the deployment.

Installing the Chart

This guide covers deploying Anchore Enterprise on a Kubernetes cluster with the default configuration. Refer to the Configuration section of the chart README for additional guidance on production deployments.

  1. Create the namespace: The following steps require that the namespace has already been created.

    export NAMESPACE=anchore
    
    kubectl create namespace ${NAMESPACE}
    
  2. Create a Kubernetes Secret for License File: Generate a Kubernetes secret to store your Anchore Enterprise license file.

    export NAMESPACE=anchore
    export LICENSE_PATH="license.yaml"
    
    kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=${LICENSE_PATH} -n ${NAMESPACE}
    
  3. Create a Kubernetes Secret for DockerHub Credentials: Generate another Kubernetes secret for DockerHub credentials. These credentials should have access to private Anchore Enterprise repositories. We recommend that you create a brand new DockerHub user for these pull credentials. Contact Anchore Support to obtain access.

    export NAMESPACE=anchore
    export DOCKERHUB_PASSWORD="password"
    export DOCKERHUB_USER="username"
    export DOCKERHUB_EMAIL="youremail@example.com"
    
    kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=${DOCKERHUB_USER} --docker-password=${DOCKERHUB_PASSWORD} --docker-email=${DOCKERHUB_EMAIL} -n ${NAMESPACE}
    
  4. Add Chart Repository & Deploy Anchore Enterprise: Create a custom values file, named anchore_values.yaml, to override any chart parameters. Refer to the Parameters section for available options.

    Important: Default passwords are specified in the chart. It’s highly recommended to modify these before deploying.

    Note: The RELEASE variable should not contain any dots.

    export NAMESPACE=anchore
    export RELEASE=my-release
    
    helm repo add anchore https://charts.anchore.io
    helm install ${RELEASE} -n ${NAMESPACE} anchore/enterprise -f anchore_values.yaml
    

    Note: This command installs Anchore Enterprise with a chart-managed PostgreSQL database, which may not be suitable for production use. See the External Database section of the chart README for details on using an external database.

  5. Post-Installation Steps: Anchore Enterprise will take some time to initialize. After the bootstrap phase, it will begin a vulnerability feed sync. Image analysis will show zero vulnerabilities, and the UI will show errors until this sync is complete. This can take several hours based on the enabled feeds. Use the following anchorectl commands to check the system status:

    export NAMESPACE=anchore
    export RELEASE=my-release
    export ANCHORECTL_URL=http://localhost:8228
    export ANCHORECTL_PASSWORD=$(kubectl get secret "${RELEASE}-enterprise" -o jsonpath='{.data.ANCHORE_ADMIN_PASSWORD}' | base64 -d -)
    
    kubectl port-forward -n ${NAMESPACE} svc/${RELEASE}-enterprise-api 8228:8228 # port forward for anchorectl in another terminal
    anchorectl system status # anchorectl defaults to the user admin, and to the password ${ANCHORECTL_PASSWORD} automatically if set
    

    Tip: List all releases using helm list
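Because the initial feed sync can take several hours, it can be convenient to block until the system reports ready, using the same utility shown in the Docker Compose guide above:

anchorectl system wait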

Next Steps

Now that you have Anchore Enterprise running, you can begin learning more about Anchore Enterprise architecture, Anchore concepts, and Anchore usage.

  • To learn more about Anchore Enterprise, go to Overview
  • To learn more about Anchore Concepts, go to Concepts

3.1 - Deploying Anchore Enterprise on Azure Kubernetes Service (AKS)

This document will walk you through the deployment of Anchore Enterprise in an Azure Kubernetes Service (AKS) cluster and expose it on the public Internet.

Prerequisites

  • A running AKS cluster with worker nodes launched. See AKS Documentation for more information on this setup.
  • Helm client installed on your local host.
  • AnchoreCTL installed on your local host.

Once you have an AKS cluster up and running with worker nodes launched, you can verify it using the following command:

$ kubectl get nodes

NAME                       STATUS   ROLES   AGE     VERSION
aks-nodepool1-28659018-0   Ready    agent   4m13s   v1.13.10
aks-nodepool1-28659018-1   Ready    agent   4m15s   v1.13.10
aks-nodepool1-28659018-2   Ready    agent   4m6s    v1.13.10

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:

  • Anchore Enterprise software
  • PostgreSQL (13 or higher)
  • Redis (7 or higher)

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore; this document is intended to cover the minimum required changes to successfully deploy Anchore Enterprise in AKS.

Note: For this installation, an NGINX ingress controller will be used. You can read more about Kubernetes Ingress in AKS here.

Configurations

Make the changes below in your anchore_values.yaml file.

Ingress

ingress:
  enabled: true
  labels: {}
  apiPaths:
    - /v2/
  uiPath: /
  annotations:
    kubernetes.io/ingress.class: nginx

Note: Configuring ingress is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

Anchore API Service

# Pod configuration for the anchore api service.
api:
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Note: The service type has been changed to NodePort.

Anchore Enterprise UI

ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

Note: The service type has been changed to NodePort.

Install NGINX Ingress Controller

Using Helm, install an NGINX ingress controller in your AKS cluster.

helm install stable/nginx-ingress --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux

Deploy Anchore Enterprise

Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.

Create a Kubernetes secret containing your license file:

kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

Create a Kubernetes secret containing DockerHub credentials with access to the private Anchore Enterprise software:

kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

Deploy Anchore Enterprise:

helm repo add anchore https://charts.anchore.io
helm install anchore anchore/enterprise -f anchore_values.yaml

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl get pods

NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           3/3     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 2/2     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-feeds-77b8976c4c-rs8h2                                    1/1     Running   0          13m
anchore-feeds-db-0                                                1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m
mangy-serval-nginx-ingress-controller-788dd98c8b-jv2wg            1/1     Running   0          21m
mangy-serval-nginx-ingress-default-backend-8686cd585b-4m2bt       1/1     Running   0          21m

We can see that the NGINX ingress controller from the previous step has been installed as well. You can view the services by running the following command:

$ kubectl get services | grep ingress

mangy-serval-nginx-ingress-controller                LoadBalancer   10.0.30.174    40.114.26.147   80:31176/TCP,443:30895/TCP                     22m
mangy-serval-nginx-ingress-default-backend           ClusterIP      10.0.243.221   <none>          80/TCP                                         22m

Note: The above output shows that the IP address of the NGINX ingress controller is 40.114.26.147. Going to this address in the browser will take us to the Anchore login page.

[Image: Anchore Enterprise login page]

Anchore System

Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://40.114.26.147/v2/ ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status

Anchore Feeds

It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:

ANCHORECTL_URL=http://40.114.26.147/v2/ ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list

Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync occurs.

Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Vulnerability Management section of our documentation for more information.

3.2 - Deploying Anchore Enterprise on Amazon EKS

This section provides information on how to deploy Anchore Enterprise onto Amazon EKS. Here is the recommended architecture on AWS EKS:

[Image: recommended Anchore Enterprise architecture on AWS EKS]

Prerequisites

You’ll need a running Amazon EKS cluster with worker nodes. See EKS Documentation for more information on this setup.

Once you have an EKS cluster up and running with worker nodes launched, you can verify it using the following command:

$ kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-2-164.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-35-43.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-55-228.ec2.internal   Ready    <none>   10m   v1.14.6-eks-5047ed

In order to deploy the Anchore Enterprise services, you’ll then need the Helm client installed on your local host.

Deployment via Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process.

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore. The following is intended to cover the recommended changes for successfully deploying Anchore Enterprise on Amazon EKS.

Configurations

The following configurations should be used when deploying on EKS.

RDS

Anchore recommends utilizing Amazon RDS for a managed database service, rather than the chart-managed PostgreSQL. For information on how to configure an external RDS database, see Amazon RDS.

S3 Object Storage

Anchore supports the use of S3 object storage for archival of SBOMs; configuration details can be found here. Consider using the iamauto: True option to utilize IAM roles for access to S3, as sketched below.
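As an illustrative sketch only, an S3 archive driver using IAM-role authentication might look like the following in your values file. The layout mirrors Anchore's documented object-store configuration, but the bucket name is hypothetical and the exact key placement should be confirmed against your chart version:

anchoreConfig:
  catalog:
    object_store:
      storage_driver:
        name: s3
        config:
          bucket: anchore-archive-bucket   # hypothetical bucket name
          region: us-east-1
          iamauto: true                    # use IAM roles instead of static access keys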

PVCs

Anchore by default uses ephemeral storage for pods, but we recommend configuring analyzer scratch space at a minimum. Further details can be found here.

Anchore generally recommends providing gp3-type EBS-backed storage for analyzer scratch space. Note that you will need to follow the AWS guide on storing Kubernetes volumes with Amazon EBS. Once the CSI driver is configured for your cluster, you will then need to configure your Helm chart with values similar to this:

analyzer:
  scratchVolume:
    details:
      ephemeral:
        volumeClaimTemplate:
          metadata: {}
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                # must be 3 x ANCHORE_MAX_COMPRESSED_IMAGE_SIZE_MB + analyzer cache size
                storage: 100Gi
            # this refers to whatever your storage class was named
            storageClassName: "gp3"

Ingress

Anchore recommends using the AWS load balancer controller for ingress.

Here is a sample manifest for use with the AWS LBC ingress:

ingress:
  enabled: true
  apiPaths:
    - /v2/
    - /version/
  uiPath: /
  ingressClassName: alb
  annotations:
    # See https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/guide/ingress/annotations.md for further customization of annotations
    alb.ingress.kubernetes.io/scheme: internet-facing
  # If you do not plan to bring your own hostname (i.e. use the AWS supplied CNAME for the load balancer) then you can leave apiHosts & uiHosts as empty lists:
  apiHosts: []
  uiHosts: []
  # If you plan to bring your own hostname then you'll likely want to populate them as follows:
  # apiHosts:
  #   - anchore.mydomain.com
  # uiHosts:
  #   - anchore.mydomain.com

You must also change the following service types from ClusterIP to NodePort:

For the Anchore API Service:

# Pod configuration for the anchore engine api service.
api:
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

For the Anchore Enterprise UI Service:

ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

For users of Amazon ALB:

Users of ALB may want to align the timeouts between Gunicorn and the ALB. The AWS ALB connection idle timeout defaults to 60 seconds, while the Anchore Helm chart's keep-alive timeout defaults to 5 seconds. Sporadic HTTP 502 errors may be emitted by the ALB if these timeouts are not aligned. Setting the Anchore keep-alive timeout above the ALB idle timeout, as shown below, avoids this:

anchoreConfig:
  server:
    timeout_keep_alive: 65

Install Anchore Enterprise

Deploy Anchore Enterprise by following the instructions here.
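
In short, the install mirrors the Helm steps used elsewhere in this guide, referencing the custom values file created above:

helm repo add anchore https://charts.anchore.io
helm install anchore anchore/enterprise -f anchore_values.yaml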

Verify Ingress

Run the following command for details on the deployed ingress resource and its associated load balancer:

$ kubectl describe ingress
Name:             anchore-enterprise
Namespace:        default
Address:          xxxxxxx-default-anchoreen-xxxx-xxxxxxxxx.us-east-1.elb.amazonaws.com
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     
        /v2/*   anchore-enterprise-api:8228 (192.168.42.122:8228)
        /*      anchore-enterprise-ui:80 (192.168.14.212:3000)
Annotations:
  alb.ingress.kubernetes.io/scheme:  internet-facing
  kubernetes.io/ingress.class:       alb
Events:
  Type    Reason  Age   From                    Message
  ----    ------  ----  ----                    -------
  Normal  CREATE  14m   alb-ingress-controller  LoadBalancer 904f0f3b-default-anchoreen-d4c9 created, ARN: arn:aws:elasticloadbalancing:us-east-1:077257324153:loadbalancer/app/904f0f3b-default-anchoreen-d4c9/4b0e9de48f13daac
  Normal  CREATE  14m   alb-ingress-controller  rule 1 created with conditions [{    Field: "path-pattern",    Values: ["/v2/*"]  }]
  Normal  CREATE  14m   alb-ingress-controller  rule 2 created with conditions [{    Field: "path-pattern",    Values: ["/*"]  }]

The output above shows that an ALB has been created. Next, try navigating to the specified URL in a browser:

[Screenshot: Anchore Enterprise login page]

Verify Anchore Service Status

Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://xxxxxx-default-anchoreen-xxxx-xxxxxxxxxx.us-east-1.elb.amazonaws.com ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status

3.3 - Deploying Anchore Enterprise on Google Kubernetes Engine (GKE)

Get an understanding of deploying Anchore Enterprise on a Google Kubernetes Engine (GKE) cluster and exposing it on the public Internet.

Note: When using Google Cloud, consider utilizing Cloud SQL for PostgreSQL as a managed database service.

Prerequisites

  • A running GKE cluster with worker nodes launched. See GKE Documentation for more information on this setup.
  • Helm client installed on local host.
  • AnchoreCTL installed on local host.

Once you have a GKE cluster up and running with worker nodes launched, you can verify it by using the following command:

$ kubectl get nodes
NAME                                                STATUS   ROLES    AGE   VERSION
gke-standard-cluster-1-default-pool-c04de8f1-hpk4   Ready    <none>   78s   v1.13.7-gke.24
gke-standard-cluster-1-default-pool-c04de8f1-m03k   Ready    <none>   79s   v1.13.7-gke.24
gke-standard-cluster-1-default-pool-c04de8f1-mz3q   Ready    <none>   78s   v1.13.7-gke.24

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:

  • Anchore Enterprise software
  • PostgreSQL (13 or higher)
  • Redis (4)

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore. The following is intended to cover the minimum required changes to successfully deploy Anchore Enterprise on Google Kubernetes Engine.

Note: For this deployment, a GKE ingress controller will be used. You can read more about Kubernetes Ingress with a GKE Ingress Controller here

Configurations

Make the following changes in your anchore_values.yaml file.

Ingress

ingress:
  enabled: true
  apiPaths:
    - /v2/*
  uiPath: /*

Note: Configuring ingress is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

Anchore API Service

api:
  replicaCount: 1
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Note: The service type has been changed to NodePort.

Anchore Enterprise UI

ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

Note: The service type has been changed to NodePort.

Anchore Enterprise Deployment

Create Secrets

Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.

Create a Kubernetes secret containing your license file:

kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

Create a Kubernetes secret containing DockerHub credentials with access to the private Anchore Enterprise software:

kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>
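
To confirm that both secrets were created before installing the chart, you can list them with kubectl:

kubectl get secrets anchore-enterprise-license anchore-enterprise-pullcreds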

Deploy Anchore Enterprise:

helm repo add anchore https://charts.anchore.io
helm install anchore anchore/enterprise -f anchore_values.yaml

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           3/3     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 2/2     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-feeds-77b8976c4c-rs8h2                                    1/1     Running   0          13m
anchore-feeds-db-0                                                1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m
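
If you would rather block until the deployment is ready than poll manually, kubectl wait can do this for you (an optional convenience; adjust the timeout to your environment):

kubectl wait --for=condition=Ready pods --all --timeout=600s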

Run the following command for details on the deployed ingress resource:

$ kubectl describe ingress
Name:             anchore-enterprise
Namespace:        default
Address:          34.96.64.148
Default backend:  default-http-backend:80 (10.8.2.6:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /v2/*   anchore-enterprise-api:8228 (<none>)
        /*      anchore-enterprise-ui:80 (<none>)
Annotations:
  kubernetes.io/ingress.class:            gce
  ingress.kubernetes.io/backends:         {"k8s-be-31175--55c0399dc5755377":"HEALTHY","k8s-be-31274--55c0399dc5755377":"HEALTHY","k8s-be-32037--55c0399dc5755377":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule:  k8s-fw-default-anchore-enterprise--55c0399dc5750
  ingress.kubernetes.io/target-proxy:     k8s-tp-default-anchore-enterprise--55c0399dc5750
  ingress.kubernetes.io/url-map:          k8s-um-default-anchore-enterprise--55c0399dc5750
Events:
  Type    Reason  Age   From                     Message
  ----    ------  ----  ----                     -------
  Normal  ADD     15m   loadbalancer-controller  default/anchore-enterprise
  Normal  CREATE  14m   loadbalancer-controller  ip: 34.96.64.148

The output above shows that a load balancer has been created. Navigate to the specified URL in a browser:

[Screenshot: Anchore Enterprise login page]

Anchore System

Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://34.96.64.148 ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status

Anchore Feeds

It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:

ANCHORECTL_URL=http://34.96.64.148 ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list

Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync is in progress.

Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Vulnerability Management section of our documentation for more information.

3.4 - Deploying Anchore Enterprise on OpenShift

This document walks through deploying Anchore Enterprise on an OpenShift Kubernetes Distribution (OKD) 3.11 cluster and exposing it on the public internet.

Note: While this document walks through deploying on OKD 3.11, it has been successfully deployed and tested on OpenShift 4.2.4 and 4.2.7.

Prerequisites

  • A running OpenShift Kubernetes Distribution (OKD) 3.11 cluster. Read more about the installation requirements here.
    • Note: If deploying to a running OpenShift 4.2.4+ cluster, read more about the installation requirements here.
  • Helm client and server installed and configured with your cluster.
  • AnchoreCTL installed on local host.

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise installation of the chart will include the following:

  • Anchore Enterprise Software
  • PostgreSQL (13)
  • Redis 17

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore; this document is intended to cover the minimum required changes to successfully deploy Anchore Enterprise on OKD 3.11.

OpenShift Configurations

Create a new project

Create a new project called anchore-enterprise:

oc new-project anchore-enterprise

Create secrets

Two secrets are required for an Anchore Enterprise deployment.

Create a secret for the license file: oc create secret generic anchore-enterprise-license --from-file=license.yaml=license.yaml

Create a secret for pulling the images: oc create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<username> --docker-password=<password> --docker-email=<email>

Verify these secrets are in the correct namespace: anchore-enterprise

oc describe secret <secret-name>

Link the above Docker registry secret to the default service account:

oc secrets link default anchore-enterprise-pullcreds --for=pull --namespace=anchore-enterprise

Verify this by running the following:

oc describe sa

Note: Validate your OpenShift security context constraints (SCC). Based on the security constraints of your environment, you may need to change the SCC, for example: oc adm policy add-scc-to-user anyuid -z default

Anchore Configurations

Create a custom anchore_values.yaml file for your Anchore Enterprise deployment:

# NOTE: This is not a production ready values file for an openshift deployment.

securityContext:
  fsGroup: null
  runAsGroup: null
  runAsUser: null
postgresql:
  primary:
    containerSecurityContext:
      enabled: false
    podSecurityContext:
      enabled: false
ui-redis:
  master:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false

Install software

Run the following command to install the software:

helm repo add anchore https://charts.anchore.io
helm install anchore -f anchore_values.yaml anchore/enterprise

It will take the system several minutes to bootstrap. You can check on the status of the pods by running oc get pods:

$ oc get pods
NAME                                                              READY     STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           1/1     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 1/1     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-enterprise-datasyncer-585997576d-2fgkg                    1/1     Running   0          13m
anchore-enterprise-reportsworker-6fb4f55455-f2ts2                 1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m

Create route objects

Create two route object in the OpenShift console to expose the UI and API services on the public internet:

Note: Route configuration is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

API Route

[Screenshot: API route configuration]

UI Route

[Screenshot: UI route configuration]

Routes

[Screenshot: configured routes]

Verify by navigating to the anchore-enterprise-ui route hostname:

[Screenshot: Anchore Enterprise UI]

Anchore System

First, you will need to retrieve the admin password, which is stored as a secret during the Helm install process:

oc get secret anchore-enterprise-env -o jsonpath='{.data.ANCHORE_ADMIN_PASSWORD}' -n anchore-enterprise | base64 -d

You can also customize your Helm values file to use an existing or custom secret rather than having Helm generate one for you with a random password.
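
As a convenience, you can place the retrieved password directly into the environment variable that AnchoreCTL reads, reusing the same oc command shown above:

export ANCHORECTL_PASSWORD=$(oc get secret anchore-enterprise-env -o jsonpath='{.data.ANCHORE_ADMIN_PASSWORD}' -n anchore-enterprise | base64 -d)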

Verify API route hostname with AnchoreCTL:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://anchore-engine-anchore-enterprise.apps.54.84.147.202.nip.io \
ANCHORECTL_USERNAME=admin \
ANCHORECTL_PASSWORD=foobar \
anchorectl system status

Anchore Vulnerability Data

Anchore has a datasyncer service that pulls vulnerability data and other data sources, such as the ClamAV malware database, into your Anchore deployment. You can check on the status of this feed data using AnchoreCTL:

ANCHORECTL_URL=http://anchore-engine-anchore-enterprise.apps.54.84.147.202.nip.io \
ANCHORECTL_USERNAME=admin \
ANCHORECTL_PASSWORD=foobar \
anchorectl feed list

Note: Please continue to the Vulnerability Management section of our documentation for more information about Vulnerability Management within Anchore.

4 - Anchore Enterprise Cloud Image

Overview

The Anchore Enterprise Cloud Image is a fully functional machine image with an Anchore Enterprise deployment that is pre-configured with the goal of simplifying deployment complexity for our end users.

The Cloud Image is currently available for our Amazon users. See Anchore Enterprise Cloud Image - AWS.

Cloud Image Manager

The Cloud Image Manager is a proprietary tool that is pre-packaged in the cloud image. It allows users to manage their Anchore Enterprise Cloud Image deployments by walking users through the process of installing, configuring, and upgrading. For more details please see Cloud Image Manager.

Support Limits

The Cloud Image has the following limits, independent of instance type:

  • 10,000 Image SBOMs
  • Max Image Size is 10 GB
  • 300 Report Executions
  • 100 System Users
  • 2 to 8 accounts per deployment, depending on your purchased tier.

Non-supported Features

The Cloud Image does not currently support the following Anchore Enterprise features:

  • Runtime Inventory
  • Application Groups and Source Code Analysis
  • Windows Image Analysis
  • Legacy Image Archive

4.1 - Enterprise Cloud Image - Amazon Machine Image (AMI)

Overview

Anchore Enterprise Cloud Image is a fully functional Anchore Enterprise deployment that is pre-configured and ready to use. The cloud image is currently available for our Amazon users. For general information on the Amazon Machine Images (AMI) and how to use them, see the Amazon EC2 documentation.

The Anchore Enterprise Cloud Image Manager is shipped as part of the AMI to aid in the installation, configuration, and management of the Anchore Enterprise Cloud Image. For more information about the Cloud Image Manager, see the Cloud Image Manager.

Recommendations and Requirements

The following are requirements and recommended best practices for deploying the Anchore Enterprise Cloud Image in AWS.

  • Memory Requirement - The Cloud Image requires a minimum of 32 GB of memory to operate.
  • Disk Requirement - The Cloud Image requires a minimum of 128 GB of disk space for root volume and 1 TB for data volume to operate.
    • Note: The data volume by default will not delete on termination of your AMI.
  • CPU Requirement - The Cloud Image requires a minimum of 4 vCPU to operate.

AWS Supported Instance Type

The baseline supported instance type on Amazon Web Services is the r7a.xlarge. This gives the best balance of performance and cost for running Anchore Enterprise.

The Cloud Image Manager will not enforce the use of this instance type but will check for the minimum resources needed to run the software. If you would like to use a different instance type, please contact Anchore Support for guidance.

For more information, please review the AWS documentation on instance types.

Key pair type

The Anchore Enterprise Cloud Image runs with FIPS enabled. When creating your key pair, you must use an RSA key; an ED25519 key will be rejected as a non-FIPS-compliant algorithm.

Please review the AWS documentation on using Amazon EC2 Key Pairs

Security Group

The Anchore Enterprise Cloud Image requires the following ports to be open in the security group:

  • TCP 22 - SSH
  • TCP 443 - HTTPS
  • TCP 8443 - Grafana

Please review the AWS documentation on Security Groups.
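
If you manage security groups from the AWS CLI, rules like the following open those ports. This is a sketch only; the group ID and source CIDR are placeholders you must replace with your own values:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8443 --cidr 203.0.113.0/24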

Cloud Image Manager Terminals

Please review the Best Practices for the Cloud Image Manager for the recommended terminal applications to use.

Anchore Cloud Image License

The Anchore Enterprise Cloud Image requires a valid license to operate. The license is provided by Anchore during the purchase process, and the license file must be uploaded via the Cloud Image Manager during the initial setup. Please have it available before starting the installation process.

Launching the AMI

To launch the Anchore Enterprise Cloud Image AMI, please refer to the AWS documentation on Launch an Amazon EC2 instance.

You may also want to review the AWS guide for how to Connect to your EC2 instance.
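
For reference, a CLI launch with the baseline instance type might look like the following. This is a sketch only; the AMI ID, key pair, security group, and subnet are placeholders for your own values:

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type r7a.xlarge \
    --key-name my-rsa-keypair \
    --security-group-ids sg-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0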

Once the instance is launched, please review the Cloud Image Manager documentation for the next steps on Accessing the Cloud Image Manager. The Cloud Image Manager will walk you through the preflight checks, configuration, and management of your Anchore Enterprise Cloud Image deployment.

Backup and Restore

It is important that you have a backup and restore strategy in place to protect your data. The Anchore Enterprise Cloud Image Manager will prompt you to create a snapshot prior to upgrading your Anchore Enterprise Cloud Image or expanding your disks. You should also create snapshots of your EBS volumes on a regular basis.

Please refer to the AWS documentation on AWS Backup and Amazon EBS Snapshots.

Expanding your disks

During the course of using the product, you may wish to expand the size of your disks. It is strongly recommended that you create a snapshot of your EBS volume prior to expanding your disks.

Please refer to the AWS documentation on extending or modifying disk volumes.
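
For example, growing the data volume from the AWS CLI looks roughly like this (the volume ID and target size in GiB are placeholders; snapshot first, as advised above):

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 2048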

Once you have expanded your disk, you will need to resize the filesystem to take advantage of the additional space. The Cloud Image Manager provides a utility to resize the filesystem. Please refer to the Cloud Image Manager Configuration Disk Expansion for more information.

Upgrading the Cloud Image

Occasionally, Anchore will release updates to the Anchore Enterprise Cloud Image. The Cloud Image Manager will provide you with the upgrades that are available to you and allow you to determine when you want to upgrade. It is strongly recommended that you create a snapshot of your EBS volume prior to upgrading your Anchore Enterprise Cloud Image.

Please refer to the Cloud Image Manager upgrade documentation for more information.

Support for your Cloud Image

During operation of Anchore Enterprise or the Cloud Image, you may require support from Anchore Support. The Cloud Image Manager provides you with a seamless way to generate a support bundle and upload it to Anchore Support.

Please refer to the Cloud Image Manager Support documentation for more information.

4.2 - Anchore Enterprise Cloud Image Manager

Overview

The Cloud Image Manager is a proprietary tool that allows users to seamlessly manage their Anchore Enterprise Cloud Image deployments. It walks users through the process of installing, configuring, and upgrading their Anchore Enterprise Cloud Image deployment.

Best Practices

The Cloud Image Manager uses Textual (a TUI framework for Python) to provide a terminal-based interface. For the best user experience, please use the following terminal emulators when connecting to the Cloud Image Manager.

Note: We recommend against using the default macOS Terminal application as it may not render the TUI correctly. For more information on why, please see Textual FAQ.

Accessing the Cloud Image Manager

After your instance is launched, you can access the Cloud Image Manager by connecting to the instance via SSH. Using the private key file for authentication (likely generated when setting up the instance) and the public IP address of the instance, connect with a command like the following:

ssh -i ~/my-keypair.pem <username>@<instance-public-ip>

Potential Issues

  1. Permissions on key file - If you get a WARNING: UNPROTECTED PRIVATE KEY FILE error, fix it by setting the correct permissions on your key file. Run the following command to set the correct permissions:

    chmod 400 ~/my-keypair.pem
    
  2. Connection Issues - If you experience a Connection Timeout or Host Unreachable error, verify that the instance is running and that the security group allows SSH traffic on port 22.

You should now be connected to the Cloud Image Manager.

[Screenshot: Cloud Image Manager welcome screen]

Preflight Checks

The Cloud Image Manager will perform a series of preflight checks to ensure that the system is ready for installation. These checks include ensuring that the machine image has met memory, disk space, and CPU requirements. If the system does not meet the requirements, the preflight checks will fail and the installation will not proceed.

Initial Install

The Cloud Image Manager will walk you through the initial installation process. At the end of this process, the Cloud Image Manager will provide you with the URL to access the Anchore Enterprise UI as well as your administrator credentials.

Upgrade

The Cloud Image Manager will determine if there are any upgrades available for your Anchore Enterprise Cloud Image deployment. If an upgrade is available, the Cloud Image Manager will walk you through the upgrade process. If downtime is required, the Cloud Image Manager will notify you prior to proceeding. This will allow you to plan for the upgrade when it is convenient for you. It is highly recommended that you take a snapshot of your EBS volume prior to upgrading.

Configuration

The Cloud Image Manager configuration screen allows the following options:

  • Adding and updating the Anchore Enterprise License.
  • Providing any Server Certificates required for TLS access to Anchore Enterprise services.
  • Providing a custom Root Certificate if one is required for your environment.
  • Configuring any optional proxy settings required for your environment.
  • Disk Expansion

Re-configuring Proxy Settings

Changing Proxy settings after completing the installation process currently requires manual intervention for the settings to be fully applied. If you must change the Proxy settings, please contact customer support for assistance.

Expanding Disks

The Cloud Image Manager provides a utility to expand the root and data volumes once your virtual hard disk has been increased in size. This step is necessary to take advantage of the additional space. The Cloud Image Manager will shut down Anchore Enterprise during this operation. It is highly recommended that you take a snapshot of your EBS volume prior to any operation that may modify your disk volumes.

System Status

The Cloud Image Manager provides a system status screen that shows the current service and container status of the Anchore Enterprise services. It also lists the currently deployed versions of Anchore Enterprise and Anchore Enterprise UI, as well as the other infrastructure components that are automatically deployed within the Anchore Enterprise Cloud Image.

[Screenshot: Cloud Image Manager system status screen]

Support

The Cloud Image Manager provides a support screen that allows you to:

  • Generate a support bundle. This will output the location of the generated support bundle.
  • Upload a generated support bundle. The bundle is uploaded to Anchore automatically; you must then create a support ticket and provide the Support Bundle ID and filename to the support team.
  • Access Grafana. As part of the Cloud Image deployment, you have access to Grafana data collected for your deployment, which can be used to monitor its health. The Cloud Image Manager provides a link and credentials to access the Grafana dashboard.

[Screenshot: Cloud Image Manager support screen]

5 - Deploying AnchoreCTL

In this section you will learn how to deploy and configure AnchoreCTL, the Anchore Enterprise Command Line Interface.

AnchoreCTL is published as a simple binary available for download either from your Anchore Enterprise deployment or Anchore’s release site.

Using AnchoreCTL, you can manage and inspect all aspects of your Anchore Enterprise deployments, either as a manual human-readable configuration/instrumentation/control tool or as a CLI that is designed to be used in scripted environments such as CI/CD and other automation environments.

Installation

AnchoreCTL’s major and minor release version coincides with the release version of Anchore Enterprise, however patch versions may differ. For example,

  • Enterprise v5.16.0
  • AnchoreCTL v5.16.0

Important: It is highly recommended that the version of AnchoreCTL you are using is supported by the deployed version of Enterprise. Please refer to the Enterprise Release Notes for the supported version of AnchoreCTL. See the Local examples below, where anchorectl can be downloaded from your Anchore Enterprise deployment.

MacOS / Linux

Download a local (from your Anchore deployment) or remote (from Anchore servers) version without installation:

Linux Intel/AMD64

# Local
curl -X GET "https://my-anchore.example.com/v2/system/anchorectl?operating_system=linux&architecture=amd64" -H "accept: */*"
# Remote
curl -o anchorectl.tar.gz https://anchorectl-releases.anchore.io/anchorectl/v5.16.0/anchorectl_5.16.0_linux_amd64.tar.gz

MacOS Intel/AMD64

# Local
curl -X GET "https://my-anchore.example.com/v2/system/anchorectl?operating_system=darwin&architecture=amd64" -H "accept: */*"
# Remote
curl -o anchorectl.tar.gz https://anchorectl-releases.anchore.io/anchorectl/v5.16.0/anchorectl_5.16.0_darwin_amd64.tar.gz

MacOS ARM/M-Series

# Local
curl -X GET "https://my-anchore.example.com/v2/system/anchorectl?operating_system=darwin&architecture=arm64" -H "accept: */*"
# Remote
curl -o anchorectl.tar.gz https://anchorectl-releases.anchore.io/anchorectl/v5.16.0/anchorectl_5.16.0_darwin_arm64.tar.gz
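
After downloading on Linux or macOS, extract the archive and place the binary on your PATH. This is a sketch that assumes the tarball contains the anchorectl binary at its root:

tar -xzf anchorectl.tar.gz
sudo install anchorectl /usr/local/bin/
anchorectl version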

Windows

For Windows, you must specify the version of AnchoreCTL to download if using a script.

# Local
curl -X GET "https://my-anchore.example.com/v2/system/anchorectl?operating_system=windows&architecture=amd64" -H "accept: */*"
# Remote
curl -o anchorectl.zip https://anchorectl-releases.anchore.io/anchorectl/v5.16.0/anchorectl_5.16.0_windows_amd64.zip

Installing a specific AnchoreCTL version

# Replace <DESTINATION_DIR> with /usr/local/bin (for example)
curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b <DESTINATION_DIR> v5.16.0

Configuration

Once AnchoreCTL has been installed, learn about AnchoreCTL Configuration.

6 - Anchore Enterprise in an Air-Gapped Environment

Anchore Enterprise can run in an isolated environment with no outside internet connectivity. It does require a network connection to its own components and should be able to reach registries (Docker v2 API compatible) where the images to be analyzed are hosted.

Installation

Air-gapped deployment follows the standard deployment procedure for either Docker Compose or Kubernetes with Helm.

Data Synchronization

To ensure that the Anchore Enterprise installation has up-to-date vulnerability data from the vulnerability sources, you will need to periodically download and import feed data into your Anchore Enterprise deployment. Details on how to do this can be found in the Air-Gapped Configuration.

For more detail regarding the Anchore Data Service, please see Anchore Data Service.

6.1 - Anchore Enterprise in an Air-Gapped Environment

Once you have all the required images locally, you will need to push them to your local registry and point the image location for each service to the URL of the images in your registry.

We will assume we are using a Harbor registry locally accessible at core.harbor.domain. Follow these steps to push the images to your local registry and deploy Anchore Enterprise:

  1. Tag the images. Since the images are currently tagged with docker.io, you need to retag them with your Harbor registry URL.

Replace core.harbor.domain with your actual registry domain:

docker tag docker.io/anchore/enterprise:v5.15.0 core.harbor.domain/anchore/enterprise:v5.15.0
docker tag docker.io/library/postgres:13 core.harbor.domain/library/postgres:13
docker tag docker.io/library/redis:7 core.harbor.domain/library/redis:7
docker tag docker.io/anchore/enterprise-ui:v5.15.0 core.harbor.domain/anchore/enterprise-ui:v5.15.0
  2. Push the tagged images to Harbor:
docker push core.harbor.domain/anchore/enterprise:v5.15.0
docker push core.harbor.domain/library/postgres:13
docker push core.harbor.domain/library/redis:7
docker push core.harbor.domain/anchore/enterprise-ui:v5.15.0
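
If you prefer, the tag-and-push steps can be combined in a small shell loop. The image list and registry hostname below are the ones assumed in this example:

for img in anchore/enterprise:v5.15.0 library/postgres:13 library/redis:7 anchore/enterprise-ui:v5.15.0; do
    docker tag "docker.io/${img}" "core.harbor.domain/${img}"
    docker push "core.harbor.domain/${img}"
done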

Once all the required images are in the private registry, you will need to point all Anchore images in the docker-compose.yaml file to it.

In this example, all docker.io references have been replaced with core.harbor.domain. For instance, change:

services:
  # The primary API endpoint service
  api:
    image: docker.io/anchore/enterprise:v5.15.0
    depends_on:
      anchore-db:
        condition: service_healthy
      catalog:
        condition: service_healthy

To:

services:
  # The primary API endpoint service
  api:
    image: core.harbor.domain/anchore/enterprise:v5.15.0
    depends_on:
      anchore-db:
        condition: service_healthy
      catalog:
        condition: service_healthy

Do this for all services, as Anchore will be deployed from your private registry rather than docker.io; a scripted approach is sketched below.
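
A quick way to update every reference at once is a single sed substitution over the Compose file. Make a backup first; this assumes all docker.io references in the file should move to your registry:

cp docker-compose.yaml docker-compose.yaml.bak
sed -i 's|docker.io/|core.harbor.domain/|g' docker-compose.yaml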

Also, do not forget to set ANCHORE_DATA_SYNC_AUTO_SYNC_ENABLED to false in the dataSyncer service.

dataSyncer:
  extraEnv:
    - name: ANCHORE_DATA_SYNC_AUTO_SYNC_ENABLED
      value: "false"
  3. With your license file and docker-compose.yaml file in the working directory, execute the following to deploy Anchore Enterprise in your air-gapped environment:
docker compose up -d

6.2 - Anchore Enterprise in an Air-Gapped Environment

Download images locally

Follow these steps to manually transfer the images and deploy Anchore Enterprise on Docker.

  1. Download the images on a system with internet access. On a machine that has internet access, pull all the relevant Anchore images. We will assume the latest Anchore Enterprise version is v5.15.0, so we will pull the following images (adjust to the current version as needed):
docker pull docker.io/anchore/enterprise:v5.15.0
docker pull docker.io/library/postgres:13
docker pull docker.io/library/redis:7
docker pull docker.io/anchore/enterprise-ui:v5.15.0
  2. Save the images as tar files. Once the images are pulled, save them as a tarball so that they can be transferred to the air-gapped system. Run the following command:
docker save -o anchore_images.tar \
    docker.io/anchore/enterprise:v5.15.0 \
    docker.io/library/postgres:13 \
    docker.io/library/redis:7 \
    docker.io/anchore/enterprise-ui:v5.15.0

This command will create a tar file (approx. 2.2GB in size) containing all the pulled images.

  3. Transfer the images to the air-gapped environment. Transfer the anchore_images.tar file (via a memory stick or other means) to the air-gapped system.

  4. Load the images onto the air-gapped system. On the air-gapped system, load the images from the tarball using the following command:

docker load -i anchore_images.tar

You can verify that the images have been loaded by running:

docker images

Deploy Anchore on the Air-Gapped System

Once the images are available on the offline system, you can proceed with the deployment using docker-compose.

  1. Download the Docker Compose file. On a system with internet access, download the official Docker Compose file for Anchore:

curl https://docs.anchore.com/5.15/docs/deployment/docker_compose/docker-compose.yaml > docker-compose.yaml

Transfer this file to your offline system (using a memory stick or similar method).

  2. Set up and deploy. On the air-gapped system, place the downloaded docker-compose.yaml file in your working directory, along with your license file. Make sure the docker-compose.yaml file references the images by name and tag exactly as they appear on your local system.

Now, you can deploy Anchore with:

docker compose up -d

Docker will automatically use the locally loaded images if they exist with the correct name and tag, as referenced in the docker-compose.yaml file.

Installing via Helm