
Deploying Anchore Enterprise

Anchore Enterprise and its components are delivered as Docker container images which can be deployed as co-located, fully distributed, or anything in-between. As such, it can scale out to increase analysis throughput. The only external system required is a PostgreSQL database (13.0 or higher) that all services connect to, but do not use for communication beyond some very simple service registration/lookup processes. The database is centralized simply for ease of management and operation. For more information on the architecture, go to Anchore Enterprise Architecture.
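The 13.0 minimum can be checked before deployment. The sketch below compares a version string against that floor; the `PG_VERSION` value is a made-up example, and in practice it would come from something like `psql -tAc 'SHOW server_version;'` against your database host.

```shell
# Compare a reported PostgreSQL version against the 13.0 minimum.
# PG_VERSION is a hypothetical example value for illustration only.
PG_VERSION="13.4"
MAJOR=${PG_VERSION%%.*}
if [ "$MAJOR" -ge 13 ]; then
  echo "PostgreSQL ${PG_VERSION}: supported"
else
  echo "PostgreSQL ${PG_VERSION}: too old, 13.0 or higher required"
fi
```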

Jump to the installation guide of your choice:

1 - Deploy using Docker Compose

In this topic, you’ll learn how to use Docker Compose to get up and running with a stand-alone Anchore Enterprise deployment for trial, demonstration, and review purposes only.

Note: If you would like to gain a deeper understanding of Anchore and its concepts, review the Overview topic prior to deploying Anchore Enterprise.

Configuration Files for Docker Compose:

Requirements

The following instructions assume you are using a system running Docker v1.12 or higher, and a version of Docker Compose that supports at least v2 of the docker-compose configuration format.

  • A stand-alone deployment requires at least 4GB of RAM, and enough disk space available to support the largest container images or source repositories that you intend to analyze. It is recommended to consider three times the largest source repository or container image size. For small testing, like basic Linux distro images or database images, between 5GB and 10GB of disk space should be sufficient.
  • To access Anchore Enterprise, you need a valid license.yaml file that has been issued to you by Anchore. If you do not have a license yet, visit the Anchore Contact page to request one.
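The disk-space guidance above can be turned into a quick pre-flight check. This is a sketch only, assuming GNU coreutils `df`; `LARGEST_IMAGE_GB` is a made-up example value you would replace with the size of your largest image or repository.

```shell
# Pre-flight check: is roughly 3x the largest image size free on this
# filesystem? LARGEST_IMAGE_GB is a hypothetical example value.
LARGEST_IMAGE_GB=3
REQUIRED_GB=$((LARGEST_IMAGE_GB * 3))
AVAILABLE_GB=$(df -BG --output=avail . | tail -n 1 | tr -dc '0-9')
if [ "${AVAILABLE_GB:-0}" -ge "$REQUIRED_GB" ]; then
  echo "OK: ${AVAILABLE_GB}G available, ${REQUIRED_GB}G recommended"
else
  echo "WARNING: only ${AVAILABLE_GB}G available, ${REQUIRED_GB}G recommended"
fi
```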

Step 1: Ensure you can authenticate to DockerHub to pull the images

You’ll need authenticated access to the anchore/enterprise and anchore/enterprise-ui repositories on DockerHub. Anchore support should have granted your DockerHub user access when you received your license.

# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: <your_dockerhub_account>
Password: <your_dockerhub_password>

Step 2: Download the compose file, copy your license, and start

Now, ensure the license.yaml file you received from Anchore Sales/Support is in the directory from which you want to run the containers, then download the compose file and start it. You can use the link at the top of this page, or use curl or wget to download it as shown in the following example.

# cp <path/to/your/license.yaml> ./license.yaml
# curl https://docs.anchore.com/current/docs/deployment/docker_compose/docker-compose.yaml > docker-compose.yaml
# docker-compose up -d

Step 3: Install AnchoreCTL

Next, we’ll install the lightweight Anchore Enterprise client tool, quickly test using the version operation, and set up a few environment variables to allow it to interact with your quickstart deployment using the following process:

# curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b /usr/local/bin v5.0.0

# ./anchorectl version
Application:        anchorectl
Version:            5.0.0
SyftVersion:        v0.90.0
BuildDate:          2023-10-19T22:09:54Z
GitCommit:          f7604438b45f7161c11145999897d4ae3efcb0c8
GitDescription:     v5.0.0
Platform:           linux/amd64
GoVersion:          go1.21.1
Compiler:           gc

# export ANCHORECTL_URL="http://localhost:8228"
# export ANCHORECTL_USERNAME="admin"
# export ANCHORECTL_PASSWORD="foobar"

NOTE: for this quickstart, we’re installing the tool in your local directory ./ and will be using environment variables throughout. To more permanently install and configure anchorectl to remove the need for setting environment variables and putting the tool in a globally accessible path, see Installing AnchoreCTL.
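As an alternative to environment variables, the same settings can live in an anchorectl configuration file. The fragment below is a minimal sketch, assuming the keys mirror the ANCHORECTL_* variables above; see Installing AnchoreCTL for the authoritative file location and format.

```yaml
# ~/.anchorectl.yaml — sketch only; key names assumed to mirror the
# ANCHORECTL_URL / ANCHORECTL_USERNAME / ANCHORECTL_PASSWORD variables.
url: "http://localhost:8228"
username: "admin"
password: "foobar"
```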

Step 4: Verify service availability

After a few minutes (depending on system speed), the Anchore Enterprise and Anchore UI services should be up and running and ready to use. You can verify that the containers are running with docker-compose, as shown in the following example.

# docker-compose ps
             Name                           Command                  State               Ports         
-------------------------------------------------------------------------------------------------------
anchorequickstart_analyzer_1          /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchorequickstart_anchore-db_1        docker-entrypoint.sh postgres    Up             5432/tcp              
anchorequickstart_api_1               /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8228->8228/tcp
anchorequickstart_catalog_1           /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchorequickstart_notifications_1     /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8668->8228/tcp
anchorequickstart_policy-engine_1     /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchorequickstart_queue_1             /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchorequickstart_rbac-authorizer_1   /docker-entrypoint.sh anch ...   Up (healthy)   8089/tcp, 8228/tcp    
anchorequickstart_rbac-manager_1      /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8229->8228/tcp
anchorequickstart_reports_1           /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8558->8228/tcp
anchorequickstart_reports_worker_1    /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:55427->8228/tcp
anchorequickstart_ui-redis_1          docker-entrypoint.sh redis ...   Up             6379/tcp              
anchorequickstart_ui_1                /docker-entrypoint.sh node ...   Up             0.0.0.0:3000->3000/tcp

You can then run a command to get the status of the Anchore Enterprise services:


# ./anchorectl system status
 ✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE         │ HOST ID            │ URL                         │ UP   │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer        │ anchore-quickstart │ http://analyzer:8228        │ true │ available      │ 500        │ 5.0.0        │
│ policy_engine   │ anchore-quickstart │ http://policy-engine:8228   │ true │ available      │ 500        │ 5.0.0        │
│ apiext          │ anchore-quickstart │ http://api:8228             │ true │ available      │ 500        │ 5.0.0        │
│ reports         │ anchore-quickstart │ http://reports:8228         │ true │ available      │ 500        │ 5.0.0        │
│ reports_worker  │ anchore-quickstart │ http://reports-worker:8228  │ true │ available      │ 500        │ 5.0.0        │
│ simplequeue     │ anchore-quickstart │ http://queue:8228           │ true │ available      │ 500        │ 5.0.0        │
│ rbac_manager    │ anchore-quickstart │ http://rbac-manager:8228    │ true │ available      │ 500        │ 5.0.0        │
│ notifications   │ anchore-quickstart │ http://notifications:8228   │ true │ available      │ 500        │ 5.0.0        │
│ rbac_authorizer │ anchore-quickstart │ http://rbac-authorizer:8228 │ true │ available      │ 500        │ 5.0.0        │
│ catalog         │ anchore-quickstart │ http://catalog:8228         │ true │ available      │ 500        │ 5.0.0        │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘

Note: The first time you run Anchore Enterprise, vulnerability data takes a few minutes to sync to the system. If you also use the on-prem feed service, the sync takes considerably longer (often two or more hours, depending on network speed). For the best experience, wait until the core vulnerability data feeds have finished syncing before proceeding. You can check the status of your feed sync using AnchoreCTL:

# ./anchorectl feed list
 ✔ List feed
┌─────────────────┬─────────────────┬─────────┬──────────────────────┬──────────────┐
│ FEED            │ GROUP           │ ENABLED │ LAST SYNC            │ RECORD COUNT │
├─────────────────┼─────────────────┼─────────┼──────────────────────┼──────────────┤
│ vulnerabilities │ alpine:3.10     │ true    │ 2022-08-26T14:08:51Z │ 2331         │
│ vulnerabilities │ alpine:3.11     │ true    │ 2022-08-26T14:08:51Z │ 2665         │
│ vulnerabilities │ alpine:3.12     │ true    │ 2022-08-26T14:08:51Z │ 3205         │
│ vulnerabilities │ alpine:3.13     │ true    │ 2022-08-26T14:08:51Z │ 3656         │
│ vulnerabilities │ alpine:3.14     │ true    │ 2022-08-26T14:08:51Z │ 4097         │
│ vulnerabilities │ alpine:3.15     │ true    │ 2022-08-26T14:08:51Z │ 4479         │
│ vulnerabilities │ alpine:3.16     │ true    │ 2022-08-26T14:08:51Z │ 4763         │
│ vulnerabilities │ alpine:3.2      │ true    │ 2022-08-26T14:08:51Z │ 306          │
│ vulnerabilities │ alpine:3.3      │ true    │ 2022-08-26T14:08:51Z │ 471          │
│ vulnerabilities │ alpine:3.4      │ true    │ 2022-08-26T14:08:51Z │ 683          │
│ vulnerabilities │ alpine:3.5      │ true    │ 2022-08-26T14:08:51Z │ 903          │
│ vulnerabilities │ alpine:3.6      │ true    │ 2022-08-26T14:08:51Z │ 1077         │
│ vulnerabilities │ alpine:3.7      │ true    │ 2022-08-26T14:08:51Z │ 1462         │
│ vulnerabilities │ alpine:3.8      │ true    │ 2022-08-26T14:08:51Z │ 1675         │
│ vulnerabilities │ alpine:3.9      │ true    │ 2022-08-26T14:08:51Z │ 1962         │
│ vulnerabilities │ amzn:2          │ true    │ 2022-08-26T14:08:51Z │ 925          │
│ vulnerabilities │ amzn:2022       │ true    │ 2022-08-26T14:08:51Z │ 124          │
│ vulnerabilities │ debian:10       │ true    │ 2022-08-26T14:08:51Z │ 28893        │
│ vulnerabilities │ debian:11       │ true    │ 2022-08-26T14:08:51Z │ 26431        │
│ vulnerabilities │ debian:12       │ true    │ 2022-08-26T14:08:51Z │ 25660        │
│ vulnerabilities │ debian:7        │ true    │ 2022-08-26T14:08:51Z │ 20455        │
│ vulnerabilities │ debian:8        │ true    │ 2022-08-26T14:08:51Z │ 24058        │
│ vulnerabilities │ debian:9        │ true    │ 2022-08-26T14:08:51Z │ 28240        │
│ vulnerabilities │ debian:unstable │ true    │ 2022-08-26T14:08:51Z │ 31740        │
│ vulnerabilities │ github:composer │ true    │ 2022-08-26T14:08:51Z │ 1000         │
│ vulnerabilities │ github:gem      │ true    │ 2022-08-26T14:08:51Z │ 473          │
│ vulnerabilities │ github:go       │ true    │ 2022-08-26T14:08:51Z │ 566          │
│ vulnerabilities │ github:java     │ true    │ 2022-08-26T14:08:51Z │ 2057         │
│ vulnerabilities │ github:npm      │ true    │ 2022-08-26T14:08:51Z │ 2585         │
│ vulnerabilities │ github:nuget    │ true    │ 2022-08-26T14:08:51Z │ 216          │
│ vulnerabilities │ github:python   │ true    │ 2022-08-26T14:08:51Z │ 1244         │
│ vulnerabilities │ github:rust     │ true    │ 2022-08-26T14:08:51Z │ 289          │
│ vulnerabilities │ nvd             │ true    │ 2022-08-26T14:08:51Z │ 193942       │
│ vulnerabilities │ ol:5            │ true    │ 2022-08-26T14:08:51Z │ 1255         │
│ vulnerabilities │ ol:6            │ true    │ 2022-08-26T14:08:51Z │ 1666         │
│ vulnerabilities │ ol:7            │ true    │ 2022-08-26T14:08:51Z │ 1837         │
│ vulnerabilities │ ol:8            │ true    │ 2022-08-26T14:08:51Z │ 1028         │
│ vulnerabilities │ ol:9            │ true    │ 2022-08-26T14:08:51Z │ 56           │
│ vulnerabilities │ rhel:5          │ true    │ 2022-08-26T14:08:51Z │ 7827         │
│ vulnerabilities │ rhel:6          │ true    │ 2022-08-26T14:08:51Z │ 8352         │
│ vulnerabilities │ rhel:7          │ true    │ 2022-08-26T14:08:51Z │ 7847         │
│ vulnerabilities │ rhel:8          │ true    │ 2022-08-26T14:08:51Z │ 4198         │
│ vulnerabilities │ rhel:9          │ true    │ 2022-08-26T14:08:51Z │ 1097         │
│ vulnerabilities │ sles:11         │ true    │ 2022-08-26T14:08:51Z │ 594          │
│ vulnerabilities │ sles:11.1       │ true    │ 2022-08-26T14:08:51Z │ 6125         │
│ vulnerabilities │ sles:11.2       │ true    │ 2022-08-26T14:08:51Z │ 3291         │
│ vulnerabilities │ sles:11.3       │ true    │ 2022-08-26T14:08:51Z │ 7081         │
│ vulnerabilities │ sles:11.4       │ true    │ 2022-08-26T14:08:51Z │ 6583         │
│ vulnerabilities │ sles:12         │ true    │ 2022-08-26T14:08:51Z │ 5918         │
│ vulnerabilities │ sles:12.1       │ true    │ 2022-08-26T14:08:51Z │ 6206         │
│ vulnerabilities │ sles:12.2       │ true    │ 2022-08-26T14:08:51Z │ 7625         │
│ vulnerabilities │ sles:12.3       │ true    │ 2022-08-26T14:08:51Z │ 9395         │
│ vulnerabilities │ sles:12.4       │ true    │ 2022-08-26T14:08:51Z │ 9428         │
│ vulnerabilities │ sles:12.5       │ true    │ 2022-08-26T14:08:51Z │ 9810         │
│ vulnerabilities │ sles:15         │ true    │ 2022-08-26T14:08:51Z │ 8500         │
│ vulnerabilities │ sles:15.1       │ true    │ 2022-08-26T14:08:51Z │ 8168         │
│ vulnerabilities │ sles:15.2       │ true    │ 2022-08-26T14:08:51Z │ 7684         │
│ vulnerabilities │ sles:15.3       │ true    │ 2022-08-26T14:08:51Z │ 7830         │
│ vulnerabilities │ sles:15.4       │ true    │ 2022-08-26T14:08:51Z │ 7435         │
│ vulnerabilities │ ubuntu:12.04    │ true    │ 2022-08-26T14:08:51Z │ 14963        │
│ vulnerabilities │ ubuntu:12.10    │ true    │ 2022-08-26T14:08:51Z │ 5652         │
│ vulnerabilities │ ubuntu:13.04    │ true    │ 2022-08-26T14:08:51Z │ 4127         │
│ vulnerabilities │ ubuntu:14.04    │ true    │ 2022-08-26T14:08:51Z │ 29362        │
│ vulnerabilities │ ubuntu:14.10    │ true    │ 2022-08-26T14:08:51Z │ 4456         │
│ vulnerabilities │ ubuntu:15.04    │ true    │ 2022-08-26T14:08:51Z │ 6240         │
│ vulnerabilities │ ubuntu:15.10    │ true    │ 2022-08-26T14:08:51Z │ 6513         │
│ vulnerabilities │ ubuntu:16.04    │ true    │ 2022-08-26T14:08:51Z │ 26480        │
│ vulnerabilities │ ubuntu:16.10    │ true    │ 2022-08-26T14:08:51Z │ 8647         │
│ vulnerabilities │ ubuntu:17.04    │ true    │ 2022-08-26T14:08:51Z │ 9157         │
│ vulnerabilities │ ubuntu:17.10    │ true    │ 2022-08-26T14:08:51Z │ 7943         │
│ vulnerabilities │ ubuntu:18.04    │ true    │ 2022-08-26T14:08:51Z │ 20984        │
│ vulnerabilities │ ubuntu:18.10    │ true    │ 2022-08-26T14:08:51Z │ 8400         │
│ vulnerabilities │ ubuntu:19.04    │ true    │ 2022-08-26T14:08:51Z │ 8669         │
│ vulnerabilities │ ubuntu:19.10    │ true    │ 2022-08-26T14:08:51Z │ 8431         │
│ vulnerabilities │ ubuntu:20.04    │ true    │ 2022-08-26T14:08:51Z │ 14810        │
│ vulnerabilities │ ubuntu:20.10    │ true    │ 2022-08-26T14:08:51Z │ 9996         │
│ vulnerabilities │ ubuntu:21.04    │ true    │ 2022-08-26T14:08:51Z │ 11343        │
│ vulnerabilities │ ubuntu:21.10    │ true    │ 2022-08-26T14:08:51Z │ 12673        │
│ vulnerabilities │ ubuntu:22.04    │ true    │ 2022-08-26T14:08:51Z │ 12992        │
└─────────────────┴─────────────────┴─────────┴──────────────────────┴──────────────┘

As soon as you see RECORD COUNT values populated for all vulnerability groups, the system is fully populated and ready to present vulnerability results. Note that feed syncs are incremental, so the next time you start up Anchore Enterprise it will be ready immediately. AnchoreCTL includes a utility that blocks until the feeds have completed a successful sync:


# ./anchorectl system wait
 ✔ API available                                                                                        system
 ✔ Services available                        [10 up]                                                    system
 ✔ Vulnerabilities feed ready                                                                           system

Step 5: Start using Anchore

To get started, you can add a few images to Anchore Enterprise using AnchoreCTL. Once complete, you can also run an additional AnchoreCTL command to monitor the analysis state of the added images, waiting until the images move into an ‘analyzed’ state.

# ./anchorectl image add docker.io/library/alpine:latest
 ✔ Added Image                                                                                                              docker.io/library/alpine:latest
Image:
  status:           not-analyzed (active)
  tag:              docker.io/library/alpine:latest
  digest:           sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870
  id:               9c6f0724472873bb50a2ae67a9e7adcb57673a183cea8b06eb778dca859181b5

# ./anchorectl image add docker.io/library/nginx:latest
 ✔ Added Image                                                                                                              docker.io/library/nginx:latest
Image:
  status:           not-analyzed (active)
  tag:              docker.io/library/nginx:latest
  digest:           sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
  id:               2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
  distro:           debian@11 (amd64)
  layers:           6

# ./anchorectl image list
 ✔ Fetched images
┌───────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────────┬────────┐
│ TAG                                                   │ DIGEST                                                                  │ ANALYSIS     │ STATUS │
├───────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────────┼────────┤
│ docker.io/library/alpine:latest                       │ sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870 │ analyzed     │ active │
│ docker.io/library/nginx:latest                        │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ not_analyzed │ active │
└───────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────────┴────────┘

# ./anchorectl image add docker.io/library/nginx:latest --force --wait
 ⠏ Adding Image                                                                                                              docker.io/library/nginx:latest
 ⠼ Analyzing Image                           [analyzing]                                                                     docker.io/library/nginx:latest
...
...
 ✔ Analyzed Image                                                                                                            docker.io/library/nginx:latest
Image:
  status:           analyzed (active)
  tags:             docker.io/library/nginx:latest
  digest:           sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc
  id:               2b7d6430f78d432f89109b29d88d4c36c868cdbf15dc31d2132ceaa02b993763
  distro:           debian@11 (amd64)
  layers:           6

# ./anchorectl image list
 ✔ Fetched images
┌───────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────────────┬──────────┬────────┐
│ TAG                                                   │ DIGEST                                                                  │ ANALYSIS │ STATUS │
├───────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────────────┼──────────┼────────┤
│ docker.io/library/alpine:latest                       │ sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870 │ analyzed │ active │
│ docker.io/library/nginx:latest                        │ sha256:89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc │ analyzed │ active │
└───────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────┴──────────┴────────┘

Now that some images are in place, you can point your browser at the Anchore Enterprise UI by directing it to http://localhost:3000/.

Enter the username admin and password foobar to log in. These are some of the features you can use in the browser:

  • Navigate images
  • Inspect image contents
  • Perform security scans
  • Review compliance policy evaluations
  • Edit compliance policies with a complete policy editor UI
  • Manage accounts, users, and RBAC assignments
  • Review system events

Note: This document is intended to serve as a quickstart guide. Before moving further with Anchore Enterprise, it is highly recommended to read the Overview sections to gain a deeper understanding of fundamentals, concepts, and proper usage.

Enable Microsoft Windows Image Support

To enable scanning of Microsoft Windows images, you’ll have to configure the system to deploy a feed service and set up the proper drivers to collect vulnerability data for Microsoft Windows.

For more information, see: Enable Microsoft Windows Scanning.

Next Steps

Now that you have Anchore Enterprise running, you can begin to learn more about Anchore capabilities, architecture, concepts, and more.

Optional: Enabling Prometheus Monitoring

  1. Uncomment the following section at the bottom of the docker-compose.yaml file:

    #  # Uncomment this section to add a prometheus instance to gather metrics. This is mostly for quickstart to demonstrate prometheus metrics exported
    #  prometheus:
    #    image: docker.io/prom/prometheus:latest
    #    depends_on:
    #      - api
    #    volumes:
    #      - ./anchore-prometheus.yml:/etc/prometheus/prometheus.yml:z
    #    logging:
    #      driver: "json-file"
    #      options:
    #        max-size: 100m
    #    ports:
    #      - "9090:9090"
    #
    
  2. For each service entry in the docker-compose.yaml file, change the following setting to enable metrics in the API:

    ANCHORE_ENABLE_METRICS=false
    

    to

    ANCHORE_ENABLE_METRICS=true
    
  3. Download the example prometheus configuration into the same directory as the docker-compose.yaml file, with name anchore-prometheus.yml:

    curl https://docs.anchore.com/current/docs/quickstart/anchore-prometheus.yml > anchore-prometheus.yml
    docker compose up -d
    

    Result: You should see a new container start, and you can access Prometheus in your browser at http://localhost:9090.
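The flag change in step 2 above can also be done in one pass with sed. This is a sketch, assuming GNU sed and a docker-compose.yaml in the current directory; it is guarded so it does nothing when the file is absent.

```shell
# Flip the metrics flag for every service in docker-compose.yaml at once,
# then report how many lines now enable metrics.
COMPOSE_FILE=docker-compose.yaml
if [ -f "$COMPOSE_FILE" ]; then
  sed -i 's/ANCHORE_ENABLE_METRICS=false/ANCHORE_ENABLE_METRICS=true/g' "$COMPOSE_FILE"
  grep -c 'ANCHORE_ENABLE_METRICS=true' "$COMPOSE_FILE"
fi
```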

Optional: Enabling Swagger UI

  1. Uncomment the following section at the bottom of the docker-compose.yaml file:

    #  # Uncomment this section to run a swagger UI service, for inspecting and interacting with the system API via a browser (http://localhost:8080 by default, change if needed in both sections below)
    #  swagger-ui-nginx:
    #    image: docker.io/nginx:latest
    #    depends_on:
    #      - api
    #      - swagger-ui
    #    ports:
    #      - "8080:8080"
    #    volumes:
    #      - ./anchore-swaggerui-nginx.conf:/etc/nginx/nginx.conf:z
    #    logging:
    #      driver: "json-file"
    #      options:
    #        max-size: 100m
    #  swagger-ui:
    #    image: docker.io/swaggerapi/swagger-ui
    #    environment:
    #      - URL=http://localhost:8080/v2/openapi.json
    #    logging:
    #      driver: "json-file"
    #      options:
    #        max-size: 100m
    
  2. Download the nginx configuration into the same directory as the docker-compose.yaml file, with name anchore-swaggerui-nginx.conf:

    curl https://docs.anchore.com/current/docs/deployment/anchore-swaggerui-nginx.conf > anchore-swaggerui-nginx.conf
    docker compose up -d
    

    Result: You should see a new container start, and you can access the Swagger UI in your browser at http://localhost:8080.

2 - Deploy on Kubernetes using Helm

The preferred method for deploying Anchore Enterprise on Kubernetes is with Helm. The Anchore Enterprise Helm Chart includes configuration options for a full Enterprise deployment.

The README in the chart repository contains more details on how to configure the Anchore Enterprise Helm chart and should always be consulted before proceeding with a deployment or upgrades.

About the Helm Chart

The chart is split into global and service specific configurations for the core features, as well as global and services specific configurations for the optional Enterprise services.

  • The anchoreConfig section of the values file contains the application configuration for Anchore Enterprise. This includes the database connection information, credentials, and other application settings.
  • Anchore services run as Kubernetes deployments when installed with the Helm chart. Each service has its own section in the values file for making customizations and configuring the Kubernetes deployment spec.

For a description of each component, view the official documentation at: Anchore Enterprise Service Overview
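A tiny values-file fragment can illustrate the split described above: application settings under anchoreConfig, and per-service Kubernetes tuning in each service's own section. The specific keys shown are illustrative assumptions; consult the chart README for the real parameter names.

```yaml
# anchore_values.yaml sketch — the key names below are assumptions for
# illustration; check the chart README before using them.
anchoreConfig:
  default_admin_password: "changeme"   # application-level setting (assumed key)
api:
  replicaCount: 2                      # per-service Kubernetes tuning (assumed key)
```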

Installing the Chart

Note: For migration steps from an Anchore Engine Helm chart deployment, refer to the Migrating to the Anchore Enterprise Helm Chart section of the chart README.

This guide covers deploying Anchore Enterprise on a Kubernetes cluster with the default configuration. Refer to the Configuration section of the chart README for additional guidance on production deployments.

  1. Create a Kubernetes Secret for License File: Generate a Kubernetes secret to store your Anchore Enterprise license file.

    export NAMESPACE=anchore
    export LICENSE_PATH="license.yaml"
    
    kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=${LICENSE_PATH} -n ${NAMESPACE}
    
  2. Create a Kubernetes Secret for DockerHub Credentials: Generate another Kubernetes secret for DockerHub credentials. These credentials should have access to private Anchore Enterprise repositories. We recommend that you create a brand new DockerHub user for these pull credentials. Contact Anchore Support to obtain access.

    export NAMESPACE=anchore
    export DOCKERHUB_PASSWORD="password"
    export DOCKERHUB_USER="username"
    export DOCKERHUB_EMAIL="[email protected]"
    
    kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=${DOCKERHUB_USER} --docker-password=${DOCKERHUB_PASSWORD} --docker-email=${DOCKERHUB_EMAIL} -n ${NAMESPACE}
    
  3. Add Chart Repository & Deploy Anchore Enterprise: Create a custom values file, named anchore_values.yaml, to override any chart parameters. Refer to the Parameters section for available options.

    Important: Default passwords are specified in the chart. It’s highly recommended to modify these before deploying.

    export NAMESPACE=anchore
    export RELEASE=my-release
    
    helm repo add anchore https://charts.anchore.io
    helm install ${RELEASE} -n ${NAMESPACE} anchore/enterprise -f anchore_values.yaml
    

    Note: This command installs Anchore Enterprise with a chart-managed PostgreSQL database, which may not be suitable for production use. See the External Database section of the chart README for details on using an external database.

  4. Post-Installation Steps: Anchore Enterprise will take some time to initialize. After the bootstrap phase, it will begin a vulnerability feed sync. Image analysis will show zero vulnerabilities, and the UI will show errors until this sync is complete. This can take several hours based on the enabled feeds. Use the following anchorectl commands to check the system status:

    export NAMESPACE=anchore
    export RELEASE=my-release
    export ANCHORECTL_URL=http://localhost:8228/v1/
    export ANCHORECTL_PASSWORD=$(kubectl get secret "${RELEASE}-enterprise" -o jsonpath='{.data.ANCHORE_ADMIN_PASSWORD}' | base64 -d -)
    
    kubectl port-forward -n ${NAMESPACE} svc/${RELEASE}-enterprise-api 8228:8228 # port forward for anchorectl in another terminal
    anchorectl system status # anchorectl defaults to the user admin, and to the password ${ANCHORECTL_PASSWORD} automatically if set
    

    Tip: List all releases using helm list
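The ANCHORECTL_PASSWORD export in step 4 decodes a base64-encoded Secret field. The same pipeline can be sanity-checked locally without a cluster; the "s3cret" value below is made up.

```shell
# Demonstrates the encode/decode round trip used when reading the
# ANCHORE_ADMIN_PASSWORD field out of the Kubernetes Secret.
ENCODED=$(printf 's3cret' | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "decoded password: ${DECODED}"   # prints: decoded password: s3cret
```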

Next Steps

Now that you have Anchore Enterprise running, you can begin learning more about Anchore Enterprise architecture, Anchore concepts, and Anchore usage.

  • To learn more about Anchore Enterprise, go to Overview
  • To learn more about Anchore Concepts, go to Concepts
  • To learn more about Anchore usage, go to Usage

2.1 - Deploying Anchore Enterprise on Azure Kubernetes Service (AKS)

This document will walk you through the deployment of Anchore Enterprise in an Azure Kubernetes Service (AKS) cluster and expose it on the public Internet.

Prerequisites

  • A running AKS cluster with worker nodes launched. See AKS Documentation for more information on this setup.
  • Helm client on local host.
  • AnchoreCTL installed on a local host.

Once you have an AKS cluster up and running with worker nodes launched, you can verify this with the following command.

$ kubectl get nodes

NAME                       STATUS   ROLES   AGE     VERSION
aks-nodepool1-28659018-0   Ready    agent   4m13s   v1.13.10
aks-nodepool1-28659018-1   Ready    agent   4m15s   v1.13.10
aks-nodepool1-28659018-2   Ready    agent   4m6s    v1.13.10

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:

  • Anchore Enterprise software
  • PostgreSQL (13 or higher)
  • Redis (4)

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many configuration options for Anchore; this document covers only the minimum changes required to successfully deploy Anchore Enterprise in AKS.

Note: For this installation, an NGINX ingress controller will be used. You can read more about Kubernetes Ingress in AKS here.

Configurations

Make the following changes to your anchore_values.yaml file.

Ingress

ingress:
  enabled: true
  labels: {}
  apiPaths:
    - /v2/
  uiPath: /
  annotations:
    kubernetes.io/ingress.class: nginx

Note: Configuring ingress is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

Anchore API Service

# Pod configuration for the anchore api service.
api:
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Note: The service type is changed to NodePort.

Anchore Enterprise UI

ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

Note: The service type is changed to NodePort.

Install NGINX Ingress Controller

Using Helm, install an NGINX ingress controller in your AKS cluster.

helm install stable/nginx-ingress --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux

Deploy Anchore Enterprise

Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.

Create a Kubernetes secret containing your license file:

kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

Create a Kubernetes secret containing DockerHub credentials with access to the private Anchore Enterprise software:

kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

Deploy Anchore Enterprise:

helm repo add anchore https://charts.anchore.io
helm install anchore anchore/enterprise -f anchore_values.yaml

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl get pods

NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           3/3     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 2/2     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-rbac-manager-f69574b7d-6zqwp                   2/2     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-feeds-77b8976c4c-rs8h2                                    1/1     Running   0          13m
anchore-feeds-db-0                                                1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m
mangy-serval-nginx-ingress-controller-788dd98c8b-jv2wg            1/1     Running   0          21m
mangy-serval-nginx-ingress-default-backend-8686cd585b-4m2bt       1/1     Running   0          21m
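Rather than re-running kubectl get pods manually, you can block until the deployment settles with kubectl wait; a sketch (it waits on every pod in the current namespace, including the ingress controller pods):

```shell
# Sketch: wait up to 10 minutes for all pods in the current namespace to
# become Ready. Narrow this with a label selector (see
# "kubectl get pods --show-labels") if you only care about Anchore pods.
kubectl wait --for=condition=Ready pods --all --timeout=600s
```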

The output also shows the NGINX ingress controller installed in the previous step. You can view its services by running the following command:

$ kubectl get services | grep ingress

mangy-serval-nginx-ingress-controller                LoadBalancer   10.0.30.174    40.114.26.147   80:31176/TCP,443:30895/TCP                     22m
mangy-serval-nginx-ingress-default-backend           ClusterIP      10.0.243.221   <none>          80/TCP                                         22m

Note: The output above shows that the IP address of the NGINX ingress controller is 40.114.26.147. Navigating to this address in a browser will take you to the Anchore login page.

login

Anchore System

Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://40.114.26.147/v2/ ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status

Anchore Feeds

It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:

ANCHORECTL_URL=http://40.114.26.147/v2/ ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list

Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync is still in progress.
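Since the initial sync can take a while, re-running the listing on an interval saves re-typing the full command; a sketch using the same endpoint and credentials as above:

```shell
# Sketch: export the connection settings once, then re-run the feed
# listing every 60 seconds until it shows synced groups.
export ANCHORECTL_URL=http://40.114.26.147/v2/
export ANCHORECTL_USERNAME=admin
export ANCHORECTL_PASSWORD=foobar
watch -n 60 anchorectl feed list
```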

Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Usage section of our documentation for more information.

2.2 - Deploying Anchore Enterprise on Amazon EKS

Get an understanding of the deployment of Anchore Enterprise on an Amazon EKS cluster and expose it on the public Internet.

Prerequisites

  • A running Amazon EKS cluster with worker nodes launched. See EKS Documentation for more information on this setup.
  • Helm client installed on local host.
  • AnchoreCTL installed on local host.

Once you have an EKS cluster up and running with worker nodes launched, you can verify it using the following command.

$ kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-2-164.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-35-43.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-55-228.ec2.internal   Ready    <none>   10m   v1.14.6-eks-5047ed

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:

  • Anchore Enterprise software
  • PostgreSQL (13 or higher)
  • Redis (4)

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore. The following is intended to cover the minimum required changes to successfully deploy Anchore Enterprise on Amazon EKS.

Note: For this installation, an ALB ingress controller will be used. You can read more about Kubernetes Ingress with AWS ALB Ingress Controller here

Configurations

Make the following changes to your anchore_values.yaml:

Ingress

ingress:
  enabled: true
  apiPaths:
    - /v2/*
  uiPath: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

Note: Configuring ingress is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

Anchore API Service

# Pod configuration for the anchore api service.
api:
  replicaCount: 1
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Note: Changed the service type to NodePort

Anchore Enterprise UI

ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

Note: Changed service type to NodePort.

AWS EKS Configurations

Create the IAM policy to give the Ingress controller the right permissions

  1. Go to the IAM Console.
  2. Choose the section Roles and search for the NodeInstanceRole of your EKS worker nodes.
  3. Create and attach a policy using the contents of the template iam-policy.json
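The console steps above can also be scripted with the AWS CLI; a minimal sketch, assuming the policy template has been saved locally as iam-policy.json (the policy name, role name, and account ID below are placeholders):

```shell
# Sketch: create the ALB ingress controller IAM policy and attach it to the
# EKS worker-node instance role. The policy name is arbitrary, and the role
# name and account ID are placeholders you must substitute.
aws iam create-policy \
  --policy-name ALBIngressControllerIAMPolicy \
  --policy-document file://iam-policy.json
aws iam attach-role-policy \
  --role-name <NODE_INSTANCE_ROLE> \
  --policy-arn arn:aws:iam::<AWS_ACCOUNT_ID>:policy/ALBIngressControllerIAMPolicy
```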

Deploy RBAC Roles and RoleBindings needed by the AWS ALB Ingress controller from the template below:

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.0/docs/examples/rbac-role.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.0/docs/examples/rbac-role.yaml

Update ALB Ingress

Download the ALB Ingress manifest and update the cluster-name field with the name of your EKS cluster.

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.1/docs/examples/alb-ingress-controller.yaml

# Name of your cluster. Used when naming resources created
# by the ALB Ingress Controller, providing distinction between
# clusters.
- --cluster-name=anchore-prod
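If you prefer to script the edit rather than open the manifest in an editor, a one-line sed substitution works; a sketch, assuming the --cluster-name argument is present and uncommented in the downloaded file (anchore-prod is a placeholder cluster name):

```shell
# Sketch: set --cluster-name in the downloaded manifest in place, keeping a
# .bak backup. Replace anchore-prod with the name of your EKS cluster.
sed -i.bak 's/--cluster-name=.*/--cluster-name=anchore-prod/' alb-ingress-controller.yaml
```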

Deploy the AWS ALB Ingress controller YAML:

kubectl apply -f alb-ingress-controller.yaml

Anchore Enterprise Deployment

Create Secrets

Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.

Create a Kubernetes secret containing your license file:

kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

Create a Kubernetes secret containing DockerHub credentials with access to the private Anchore Enterprise software:

kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

Deploy Anchore Enterprise:

helm repo add anchore https://charts.anchore.io

helm install anchore anchore/enterprise -f anchore_values.yaml

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           3/3     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 2/2     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-rbac-manager-f69574b7d-6zqwp                   2/2     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-feeds-77b8976c4c-rs8h2                                    1/1     Running   0          13m
anchore-feeds-db-0                                                1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m

Run the following command for details on the deployed ingress resource:

$ kubectl describe ingress
Name:             anchore-enterprise
Namespace:        default
Address:          xxxxxxx-default-anchoreen-xxxx-xxxxxxxxx.us-east-1.elb.amazonaws.com
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     
        /v2/*   anchore-enterprise-api:8228 (192.168.42.122:8228)
        /*      anchore-enterprise-ui:80 (192.168.14.212:3000)
Annotations:
  alb.ingress.kubernetes.io/scheme:  internet-facing
  kubernetes.io/ingress.class:       alb
Events:
  Type    Reason  Age   From                    Message
  ----    ------  ----  ----                    -------
  Normal  CREATE  14m   alb-ingress-controller  LoadBalancer 904f0f3b-default-anchoreen-d4c9 created, ARN: arn:aws:elasticloadbalancing:us-east-1:077257324153:loadbalancer/app/904f0f3b-default-anchoreen-d4c9/4b0e9de48f13daac
  Normal  CREATE  14m   alb-ingress-controller  rule 1 created with conditions [{    Field: "path-pattern",    Values: ["/v2/*"]  }]
  Normal  CREATE  14m   alb-ingress-controller  rule 2 created with conditions [{    Field: "path-pattern",    Values: ["/*"]  }]

The output above shows that an ELB has been created. Navigate to the specified URL in a browser:

login

Anchore System

Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://xxxxxx-default-anchoreen-xxxx-xxxxxxxxxx.us-east-1.elb.amazonaws.com ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status

Anchore Feeds

It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:

ANCHORECTL_URL=http://xxxxxx-default-anchoreen-xxxx-xxxxxxxxxx.us-east-1.elb.amazonaws.com ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list

Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync is still in progress.

Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Usage section of our documentation for more information.

2.3 - Deploying Anchore Enterprise on Google Kubernetes Engine (GKE)

Get an understanding of deploying Anchore Enterprise on a Google Kubernetes Engine (GKE) cluster and exposing it on the public Internet.

Prerequisites

  • A running GKE cluster with worker nodes launched. See GKE Documentation for more information on this setup.
  • Helm client installed on local host.
  • AnchoreCTL installed on local host.

Once you have a GKE cluster up and running with worker nodes launched, you can verify it by using the following command.

$ kubectl get nodes
NAME                                                STATUS   ROLES    AGE   VERSION
gke-standard-cluster-1-default-pool-c04de8f1-hpk4   Ready    <none>   78s   v1.13.7-gke.24
gke-standard-cluster-1-default-pool-c04de8f1-m03k   Ready    <none>   79s   v1.13.7-gke.24
gke-standard-cluster-1-default-pool-c04de8f1-mz3q   Ready    <none>   78s   v1.13.7-gke.24

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise deployment of the chart will include the following:

  • Anchore Enterprise software
  • PostgreSQL (13 or higher)
  • Redis (4)

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore. The following is intended to cover the minimum required changes to successfully deploy Anchore Enterprise on Google Kubernetes Engine.

Note: For this deployment, a GKE ingress controller will be used. You can read more about Kubernetes Ingress with a GKE Ingress Controller here

Configurations

Make the following changes to your anchore_values.yaml:

Ingress

ingress:
  enabled: true
  apiPaths:
    - /v2/*
  uiPath: /*

Note: Configuring ingress is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

Anchore API Service

api:
  replicaCount: 1
  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Note: Changed the service type to NodePort

Anchore Enterprise UI

ui:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

Note: Changed service type to NodePort.

Anchore Enterprise Deployment

Create Secrets

Enterprise services require an Anchore Enterprise license, as well as credentials with permission to access the private DockerHub repository containing the enterprise software.

Create a Kubernetes secret containing your license file:

kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

Create a Kubernetes secret containing DockerHub credentials with access to the private Anchore Enterprise software:

kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

Deploy Anchore Enterprise:

helm repo add anchore https://charts.anchore.io

helm install anchore anchore/enterprise -f anchore_values.yaml

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           3/3     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 2/2     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-rbac-manager-f69574b7d-6zqwp                   2/2     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-feeds-77b8976c4c-rs8h2                                    1/1     Running   0          13m
anchore-feeds-db-0                                                1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m

Run the following command for details on the deployed ingress resource:

$ kubectl describe ingress
Name:             anchore-enterprise
Namespace:        default
Address:          34.96.64.148
Default backend:  default-http-backend:80 (10.8.2.6:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /v2/*   anchore-enterprise-api:8228 (<none>)
        /*      anchore-enterprise-ui:80 (<none>)
Annotations:
  kubernetes.io/ingress.class:            gce
  ingress.kubernetes.io/backends:         {"k8s-be-31175--55c0399dc5755377":"HEALTHY","k8s-be-31274--55c0399dc5755377":"HEALTHY","k8s-be-32037--55c0399dc5755377":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule:  k8s-fw-default-anchore-enterprise--55c0399dc5750
  ingress.kubernetes.io/target-proxy:     k8s-tp-default-anchore-enterprise--55c0399dc5750
  ingress.kubernetes.io/url-map:          k8s-um-default-anchore-enterprise--55c0399dc5750
Events:
  Type    Reason  Age   From                     Message
  ----    ------  ----  ----                     -------
  Normal  ADD     15m   loadbalancer-controller  default/anchore-enterprise
  Normal  CREATE  14m   loadbalancer-controller  ip: 34.96.64.148

The output above shows that a load balancer has been created. Navigate to the specified URL in a browser:

login

Anchore System

Check the status of the system with AnchoreCTL to verify all of the Anchore services are up:

Note: Read more on Deploying AnchoreCTL

ANCHORECTL_URL=http://34.96.64.148 ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status

Anchore Feeds

It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:

ANCHORECTL_URL=http://34.96.64.148 ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list

Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync is still in progress.

Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Usage section of our documentation for more information.

2.4 - Deploying Anchore Enterprise on OpenShift

This document will walk through the deployment of Anchore Enterprise on an OpenShift Kubernetes Distribution (OKD) 3.11 cluster and expose it on the public internet.

Note: While this document walks through deploying on OKD 3.11, it has been successfully deployed and tested on OpenShift 4.2.4 and 4.2.7.

Prerequisites

  • A running OpenShift Kubernetes Distribution (OKD) 3.11 cluster. Read more about the installation requirements here.
    • Note: If deploying to a running OpenShift 4.2.4+ cluster, read more about the installation requirements here.
  • Helm client and server installed and configured with your cluster.
  • AnchoreCTL installed on local host.

Anchore Helm Chart

Anchore maintains a Helm chart to simplify the software deployment process. An Anchore Enterprise installation of the chart will include the following:

  • Anchore Enterprise Software
  • PostgreSQL (13)
  • Redis 4

To make the necessary configurations to the Helm chart, create a custom anchore_values.yaml file and reference it during deployment. There are many options for configuration with Anchore; this document covers the minimum required changes to successfully deploy Anchore Enterprise on OKD 3.11.

OpenShift Configurations

Create a new project

Create a new project called anchore-enterprise:

oc new-project anchore-enterprise

Create secrets

Two secrets are required for an Anchore Enterprise deployment.

Create a secret for the license file: oc create secret generic anchore-enterprise-license --from-file=license.yaml=license.yaml

Create a secret for pulling the images: oc create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<username> --docker-password=<password> --docker-email=<email>

Verify these secrets are in the correct namespace: anchore-enterprise

oc describe secret <secret-name>

Link the above Docker registry secret to the default service account:

oc secrets link default anchore-enterprise-pullcreds --for=pull --namespace=anchore-enterprise

Verify this by running the following:

oc describe sa

Note: Validate your OpenShift SCC. Based on the security constraints of your environment, you may need to change SCC. oc adm policy add-scc-to-user anyuid -z default

Anchore Configurations

Create a custom anchore_values.yaml file for your Anchore Enterprise deployment:

# NOTE: This is not a production ready values file for an openshift deployment.

securityContext:
  fsGroup: null
  runAsGroup: null
  runAsUser: null
feeds:
  securityContext:
    fsGroup: null
    runAsGroup: null
    runAsUser: null
  feeds-db:
    primary:
      containerSecurityContext:
        enabled: false
      podSecurityContext:
        enabled: false
postgresql:
  primary:
    containerSecurityContext:
      enabled: false
    podSecurityContext:
      enabled: false
ui-redis:
  master:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false

Install software

Run the following command to install the software:

helm repo add anchore https://charts.anchore.io

helm install anchore -f anchore_values.yaml anchore/enterprise

It will take the system several minutes to bootstrap. You can check on the status of the pods by running oc get pods:

$ oc get pods
NAME                                                              READY     STATUS    RESTARTS   AGE
anchore-enterprise-analyzer-7f9c7c65c8-tp8cs                      1/1     Running   0          13m
anchore-enterprise-api-754cdb48bc-x8kxt                           3/3     Running   0          13m
anchore-enterprise-catalog-64d4b9bb8-x8vmb                        1/1     Running   0          13m
anchore-enterprise-notifications-65bd45459f-q28h2                 2/2     Running   0          13m
anchore-enterprise-policy-657fdfd7f6-gzkmh                        1/1     Running   0          13m
anchore-enterprise-rbac-manager-f69574b7d-6zqwp                   2/2     Running   0          13m
anchore-enterprise-reports-596cb47894-q8g49                       1/1     Running   0          13m
anchore-enterprise-simplequeue-98b95f985-5xqcv                    1/1     Running   0          13m
anchore-enterprise-ui-6794bbd47-vxljt                             1/1     Running   0          13m
anchore-feeds-77b8976c4c-rs8h2                                    1/1     Running   0          13m
anchore-feeds-db-0                                                1/1     Running   0          13m
anchore-postgresql-0                                              1/1     Running   0          13m
anchore-ui-redis-master-0                                         1/1     Running   0          13m

Create route objects

Create two route objects in the OpenShift console to expose the UI and API services on the public internet:

Note: Route configuration is optional. It is used throughout this guide to expose the Anchore deployment on the public internet.

API Route

api-config

UI Route

ui-config

Routes

routes
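The same routes can also be created from the command line instead of the console; a minimal sketch, assuming the chart created services named anchore-enterprise-api and anchore-enterprise-ui (these names are assumptions inferred from the pod names above; confirm the actual service names first):

```shell
# Sketch: expose the API and UI services as routes. The service names are
# assumptions; confirm them with "oc get svc" before running.
oc expose service anchore-enterprise-api --name=anchore-enterprise-api
oc expose service anchore-enterprise-ui --name=anchore-enterprise-ui
oc get routes
```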

Verify by navigating to the anchore-enterprise-ui route hostname:

ui

Anchore System

Verify API route hostname with AnchoreCTL:

Note: Read more on Deploying AnchoreCTL

# ANCHORECTL_URL=http://anchore-engine-anchore-enterprise.apps.54.84.147.202.nip.io ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl system status
...
...

Anchore Feeds

It can take some time to fetch all of the vulnerability feeds from the upstream data sources. Check on the status of feeds with AnchoreCTL:

# ANCHORECTL_URL=http://anchore-engine-anchore-enterprise.apps.54.84.147.202.nip.io ANCHORECTL_USERNAME=admin ANCHORECTL_PASSWORD=foobar anchorectl feed list
...
...

Note: It is not uncommon for the above command to return an empty list ([]) while the initial feed sync is still in progress.

Once the vulnerability feed sync is complete, Anchore can begin to return vulnerability results on analyzed images. Please continue to the Usage section of our documentation for more information.

3 - Deploying AnchoreCTL

In this section you will learn how to deploy and configure AnchoreCTL, the Anchore Enterprise Command Line Interface.

AnchoreCTL is published as a simple binary that can be installed by downloading it or using provided packages for installation in different platforms.

Using AnchoreCTL, you can manage and inspect all aspects of your Anchore Enterprise deployments, either interactively as a human-readable configuration, instrumentation, and control tool, or as a CLI designed for scripted use in CI/CD pipelines and other automation.

Installation

AnchoreCTL’s release version coincides with the release version of Anchore Enterprise. For example,

  • Enterprise v5.0.0
  • AnchoreCTL v5.0.0

It is highly recommended that the version of AnchoreCTL you are using is supported by the deployed version of Enterprise. Please refer to the Enterprise Release Notes for the supported version of AnchoreCTL.

MacOS/Linux

Specify a release version and destination directory for the installation:

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> <RELEASE_VERSION>

Alternatively, you can download a specific version without installation:

curl -o anchorectl.tar.gz https://anchorectl-releases.anchore.io/anchorectl/v5.0.0/anchorectl_5.0.0_linux_amd64.tar.gz
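After downloading, the archive just needs to be unpacked and the binary placed on your PATH; a sketch continuing from the curl command above (the archive member name anchorectl is an assumption about the release layout):

```shell
# Sketch: unpack the downloaded release archive and install the binary.
# The "anchorectl" member name inside the archive is an assumption.
tar -xzf anchorectl.tar.gz anchorectl
sudo install anchorectl /usr/local/bin/anchorectl
anchorectl version
```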

Windows

For Windows, you must specify the version of AnchoreCTL to download when using a script.

curl -o anchorectl.zip https://anchorectl-releases.anchore.io/anchorectl/v5.0.0/anchorectl_5.0.0_windows_amd64.zip

Configuration

AnchoreCTL configuration files are searched for in the following order of precedence:

  1. .anchorectl.yaml
  2. anchorectl.yaml
  3. .anchorectl/config.yaml
  4. ~/.anchorectl.yaml
  5. ~/anchorectl.yaml
  6. $XDG_CONFIG_HOME/anchorectl/config.yaml

Required options:

  • url
  • username
  • password
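Putting the three required options together, a minimal ~/.anchorectl.yaml can be created as follows. This is a sketch: the URL and credentials are placeholders matching the quickstart defaults used elsewhere in this guide, and must be replaced with your own deployment's values.

```shell
# Sketch: write a minimal AnchoreCTL configuration file containing only the
# three required options. The values below are placeholders; substitute the
# URL and credentials for your own deployment.
cat > ~/.anchorectl.yaml <<'EOF'
url: "http://localhost:8228"
username: "admin"
password: "foobar"
EOF
```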

Default options:

# the Anchore Enterprise account that the user is a part of (env var: "ANCHORECTL_ACCOUNT")
account: ""

# the Anchore Enterprise user's login password (env var: "ANCHORECTL_PASSWORD")
password: ""

# the Anchore Enterprise username (env var: "ANCHORECTL_USERNAME")
username: ""

# the URL to the Anchore Enterprise API (env var: "ANCHORECTL_URL")
url: ""

debug:
  # log HTTP requests, responses, headers, and body (requires log level debug or trace) (env var: "ANCHORECTL_DEBUG_API")
  api: false

  # log all events on the internal event bus and poll rich objects read from the bus (env var: "ANCHORECTL_DEBUG_EVENTS")
  events: false


http:
  # default HTTP headers to add to all HTTP requests (env var: "ANCHORECTL_HTTP_HEADERS")
  headers: {}

  # disable SSL certificate verification for all HTTP calls (not recommended) (env var: "ANCHORECTL_HTTP_TLS_INSECURE")
  tls-insecure: false

  # time in seconds before cancelling an HTTP request (env var: "ANCHORECTL_HTTP_TIMEOUT")
  timeout: 180


log:
  # error, warn, info, debug, trace (env var: "ANCHORECTL_LOG_LEVEL")
  level: "warn"

  # file to write all log entries to (env var: "ANCHORECTL_LOG_FILE")
  file: ""


update:
  # check for a new version of anchorectl at startup (env var: "ANCHORECTL_UPDATE_CHECK")
  check: true

  # the URL used to check for application updates (env var: "ANCHORECTL_UPDATE_URL")
  url: "https://anchorectl-releases.anchore.io/anchorectl/releases/latest/metadata.json"

Usage

The anchorectl tool has extensive built-in help information for each command and operation, with many of the parameters allowing for environment overrides. To start with anchorectl, you can run the command with --help to see all the operation sections available:


# anchorectl --help
Usage:
   [flags]
   [command]

Application Config:

  (search locations: .anchorectl.yaml, anchorectl.yaml, .anchorectl/config.yaml, ~/.anchorectl.yaml, ~/anchorectl.yaml, $XDG_CONFIG_HOME/anchorectl/config.yaml)

  # the URL to the Anchore Enterprise API (env var: "ANCHORECTL_URL")
  url: ""

  # the Anchore Enterprise username (env var: "ANCHORECTL_USERNAME")
  username: ""

  # the Anchore Enterprise user's login password (env var: "ANCHORECTL_PASSWORD")
  password: ""

  # the Anchore Enterprise account that the user is a part of (env var: "ANCHORECTL_ACCOUNT")
  account: ""

  update:
    # check for a new version of anchorectl at startup (env var: "ANCHORECTL_UPDATE_CHECK")
    check: true

    # the URL used to check for application updates (env var: "ANCHORECTL_UPDATE_URL")
    url: "https://anchorectl-releases.anchore.io/anchorectl/releases/latest/metadata.json"

  # suppress logging output (env var: "ANCHORECTL_QUIET")
  quiet: false

  log:
    # error, warn, info, debug, trace (env var: "ANCHORECTL_LOG_LEVEL")
    level: "warn"

    # file to write all log entries to (env var: "ANCHORECTL_LOG_FILE")
    file: ""

  debug:
    # log HTTP requests, responses, headers, and body (requires log level debug or trace) (env var: "ANCHORECTL_DEBUG_API")
    api: false

    # log all events on the internal event bus and poll rich objects read from the bus (env var: "ANCHORECTL_DEBUG_EVENTS")
    events: false

  http:
    # disable SSL certificate verification for all HTTP calls (not recommended) (env var: "ANCHORECTL_HTTP_TLS_INSECURE")
    tls-insecure: false

    # time in seconds before cancelling an HTTP request (env var: "ANCHORECTL_HTTP_TIMEOUT")
    timeout: 180

    # default HTTP headers to add to all HTTP requests (env var: "ANCHORECTL_HTTP_HEADERS")
    headers: map[]

Available Commands:
  account      Account related operations
  application  Application related operations
  archive      Archive rule and image operations
  completion   Generate the autocompletion script for the specified shell
  compliance   Compliance report operations
  correction   Correction related operations
  event        Event related operations
  feed         Feed related operations
  help         Help about any command
  image        Image related operations
  policy       Policy related operations
  registry     Registry credential operations
  repo         Repository related operations
  source       Source repository related operations
  subscription Subscription related operations
  system       System related operations
  user         User related operations
  version      show anchorectl version information

Global Flags:
  -c, --config string   application config file (env: ANCHORECTL_CONFIG)
  -h, --help            help for this command
  -q, --quiet           suppress all logging output (env: ANCHORECTL_QUIET)
  -v, --verbose count   increase verbosity (-v = info, -vv = debug) (env: ANCHORECTL_VERBOSITY)
      --version         version for this command

Use "[command] --help" for more information about a command.

Once installed and configured, a good way to quickly test that your anchorectl client is ready to use against a deployed and running Anchore Enterprise endpoint is to exercise the system status call, which will display status information fetched from your Enterprise deployment.

With ~/.anchorectl.yaml installed and populated correctly, no environment or parameters are required:


# anchorectl system status
 ✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE         │ HOST ID            │ URL                         │ UP   │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ rbac_authorizer │ anchore-quickstart │ http://rbac-authorizer:8228 │ true │ available      │ 500        │ 5.0.0        │
│ reports         │ anchore-quickstart │ http://reports:8228         │ true │ available      │ 500        │ 5.0.0        │
│ analyzer        │ anchore-quickstart │ http://analyzer:8228        │ true │ available      │ 500        │ 5.0.0        │
│ notifications   │ anchore-quickstart │ http://notifications:8228   │ true │ available      │ 500        │ 5.0.0        │
│ apiext          │ anchore-quickstart │ http://api:8228             │ true │ available      │ 500        │ 5.0.0        │
│ policy_engine   │ anchore-quickstart │ http://policy-engine:8228   │ true │ available      │ 500        │ 5.0.0        │
│ rbac_manager    │ anchore-quickstart │ http://rbac-manager:8228    │ true │ available      │ 500        │ 5.0.0        │
│ reports_worker  │ anchore-quickstart │ http://reports-worker:8228  │ true │ available      │ 500        │ 5.0.0        │
│ simplequeue     │ anchore-quickstart │ http://queue:8228           │ true │ available      │ 500        │ 5.0.0        │
│ catalog         │ anchore-quickstart │ http://catalog:8228         │ true │ available      │ 500        │ 5.0.0        │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘

Without setting up ~/.anchorectl.yaml or any configuration file, you can interact using environment variables:


ANCHORECTL_URL="http://localhost:8228" ANCHORECTL_USERNAME="admin" ANCHORECTL_PASSWORD="foobar" anchorectl system status
 ✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE         │ HOST ID            │ URL                         │ UP   │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ rbac_authorizer │ anchore-quickstart │ http://rbac-authorizer:8228 │ true │ available      │ 500        │ 5.0.0        │
│ reports         │ anchore-quickstart │ http://reports:8228         │ true │ available      │ 500        │ 5.0.0        │
│ analyzer        │ anchore-quickstart │ http://analyzer:8228        │ true │ available      │ 500        │ 5.0.0        │
│ notifications   │ anchore-quickstart │ http://notifications:8228   │ true │ available      │ 500        │ 5.0.0        │
│ apiext          │ anchore-quickstart │ http://api:8228             │ true │ available      │ 500        │ 5.0.0        │
│ policy_engine   │ anchore-quickstart │ http://policy-engine:8228   │ true │ available      │ 500        │ 5.0.0        │
│ rbac_manager    │ anchore-quickstart │ http://rbac-manager:8228    │ true │ available      │ 500        │ 5.0.0        │
│ reports_worker  │ anchore-quickstart │ http://reports-worker:8228  │ true │ available      │ 500        │ 5.0.0        │
│ simplequeue     │ anchore-quickstart │ http://queue:8228           │ true │ available      │ 500        │ 5.0.0        │
│ catalog         │ anchore-quickstart │ http://catalog:8228         │ true │ available      │ 500        │ 5.0.0        │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘

Next Steps

Once AnchoreCTL has been installed and configured, learn about using Anchore Enterprise.

4 - Upgrading Anchore Enterprise

Upgrading from one version of Anchore Enterprise to another is normally handled seamlessly by the Helm chart or the docker-compose configuration files that are provided along with each release; both follow the general methods in this guide. See the Specific Version Upgrades section for special instructions related to specific versions.

Upgrade scenarios

Anchore Enterprise is distributed as a Docker image composed of smaller microservices that can be deployed in a single container or scaled out to handle load.

To retrieve the version of a running instance of Anchore, run the anchorectl system status command. The last column, titled “CODE VERSION”, displays the running version of each service.

anchorectl system status
 ✔ Status system                                                                                                                                                                                                                                                            
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE         │ HOST ID            │ URL                         │ UP   │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ analyzer        │ anchore-quickstart │ http://analyzer:8228        │ true │ available      │ 25         │ 4.9.3        │
│ apiext          │ anchore-quickstart │ http://api:8228             │ true │ available      │ 25         │ 4.9.3        │
│ rbac_manager    │ anchore-quickstart │ http://rbac-manager:8228    │ true │ available      │ 25         │ 4.9.3        │
│ notifications   │ anchore-quickstart │ http://notifications:8228   │ true │ available      │ 25         │ 4.9.3        │
│ catalog         │ anchore-quickstart │ http://catalog:8228         │ true │ available      │ 25         │ 4.9.3        │
│ rbac_authorizer │ anchore-quickstart │ http://rbac-authorizer:8228 │ true │ available      │ 25         │ 4.9.3        │
│ reports_worker  │ anchore-quickstart │ http://reports-worker:8228  │ true │ available      │ 25         │ 4.9.3        │
│ reports         │ anchore-quickstart │ http://reports:8228         │ true │ available      │ 25         │ 4.9.3        │
│ simplequeue     │ anchore-quickstart │ http://queue:8228           │ true │ available      │ 25         │ 4.9.3        │
│ policy_engine   │ anchore-quickstart │ http://policy-engine:8228   │ true │ available      │ 25         │ 4.9.3        │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘

In this example, the Anchore version is 4.9.3 and the database schema is version 25. In cases where the database schema changes between releases, Anchore will upgrade the database schema at launch.

Pre-upgrade Procedure

Prior to upgrading Anchore, it is highly recommended to perform a database backup/snapshot by stopping your Anchore installation and backing up the database in its entirety. There is no automatic downgrade capability; the only way to downgrade after an upgrade (whether it succeeds or fails) is to restore your database contents to a state from a prior version of Anchore, and explicitly run the compatible version of Anchore against the corresponding database contents.

Whether you wish to have the ability to downgrade or not, we recommend backing up your Anchore database prior to upgrading the software as a best practice.
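As a concrete example, the backup might look like the following for a docker-compose quickstart deployment. This is a minimal sketch: the database service name ("anchore-db") and the "postgres" superuser are assumptions, so adjust them to match your deployment before running.

```shell
# Name the backup file with a timestamp so successive backups do not collide.
BACKUP_FILE="anchore-db-backup-$(date +%Y%m%d-%H%M%S).sql"

# Stop all Anchore services, then bring only the database back up.
docker compose stop
docker compose start anchore-db   # service name is an assumption

# Dump the entire database cluster to the backup file.
docker compose exec -T anchore-db pg_dumpall -U postgres > "${BACKUP_FILE}"
```

For external or cloud-managed databases, use the snapshot mechanism your platform provides instead.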

Upgrade Procedure (for deployments using Helm)

A Helm pre-upgrade hook initiates a Kubernetes job that scales down all active Anchore Enterprise pods and handles the Anchore database upgrade.

The Helm upgrade is marked as successful only upon the job’s completion. This process causes the Helm client to pause until the job finishes and new Anchore Enterprise pods are initiated. To monitor the upgrade, follow the logs of the upgrade jobs. These jobs are automatically removed after a subsequent successful Helm upgrade.

An optional post-upgrade hook is available to perform Anchore Enterprise upgrades without forcing all pods to terminate prior to running the upgrade. This is the same upgrade behavior that was enabled by default in the legacy anchore-engine chart. To enable the post-upgrade hook, set upgradeJob.usePostUpgradeHook=true in your values file.
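For example, enabling the post-upgrade hook is a small fragment merged into your existing values file (anchore_values.yaml in the upgrade commands below):

```yaml
upgradeJob:
  usePostUpgradeHook: true
```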

For the latest upgrade instructions using the Helm chart, please refer to the official Anchore Helm Chart documentation

Performing the Upgrade

  1. View the release notes for the latest Anchore Enterprise chart version and perform any necessary steps prior to upgrading.

  2. Update the Helm repository to get the latest chart version.

    helm repo update
    
  3. Upgrade Anchore Enterprise using the Helm chart.

    export NAMESPACE=anchore
    export RELEASE=my-release
    
    helm upgrade ${RELEASE} -n ${NAMESPACE} anchore/enterprise -f anchore_values.yaml
    

Upgrade Procedure (example with docker-compose)

  1. Stop all running instances of Anchore

    docker compose down
    
  2. Make a copy of your original docker-compose.yaml file as backup

    cp docker-compose.yaml docker-compose.yaml.backup
    
  3. Download the latest docker-compose.yaml

    curl https://docs.anchore.com/current/docs/deployment/docker_compose/docker-compose.yaml > docker-compose.yaml
    
  4. Review the latest docker-compose.yaml and merge any edits/changes from your original docker-compose.yaml.backup to the latest docker-compose.yaml

  5. Restart the Anchore containers

    docker compose up -d
    

To monitor the progress of your upgrade, watch the docker logs from your catalog container. You should see initial output indicating whether an upgrade is needed or being performed, followed by the regular Anchore log output.

docker compose logs -f catalog

Once completed, you can review the new state of your Anchore install to verify the new version is running using the regular system status command.

anchorectl system status

Advanced / Manual Upgrade Procedure

If the automated upgrade fails for any reason, or you would like to perform the upgrade of the Anchore database manually, you can use the following general procedure. This should only be done by advanced operators, after backing up the Anchore database, ensuring that the database is up and running, and stopping all running Anchore components.

  • Install the desired Anchore container manually.
  • Run the Anchore container but override the entrypoint to run an interactive shell instead of the default ‘anchore-manager service start’ entrypoint command.
  • Manually execute the database upgrade command, using the appropriate db_connect string. For example, if using Postgres, the db_connect string will look like postgresql://$ANCHORE_DB_HOST/$ANCHORE_DB_NAME?user=$ANCHORE_DB_USER&password=$ANCHORE_DB_PASSWORD
$ anchore-manager db --db-connect "postgresql://$ANCHORE_DB_HOST/$ANCHORE_DB_NAME?user=$ANCHORE_DB_USER&password=$ANCHORE_DB_PASSWORD" upgrade

[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_connect_args": {"timeout": 86400, "ssl": false}, "db_pool_size": 30, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
...
...
  • The output will indicate whether or not a database upgrade is needed. It will then prompt for confirmation if it is, and will display upgrade progress output before completing.
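The bullets above can be sketched as follows. The image tag, network setting, and database values are assumptions for illustration; substitute the release you are upgrading to and your real connection details:

```shell
# Start the new Enterprise image with an interactive shell instead of the
# default 'anchore-manager service start' entrypoint (image tag is an example):
docker run -it --rm --network host \
  -e ANCHORE_DB_HOST=anchore-db \
  -e ANCHORE_DB_NAME=postgres \
  -e ANCHORE_DB_USER=postgres \
  -e ANCHORE_DB_PASSWORD=mysecretpassword \
  --entrypoint /bin/bash \
  docker.io/anchore/enterprise:v5.0.0

# Then, inside the container, run the upgrade manually:
#   anchore-manager db --db-connect \
#     "postgresql://${ANCHORE_DB_HOST}/${ANCHORE_DB_NAME}?user=${ANCHORE_DB_USER}&password=${ANCHORE_DB_PASSWORD}" upgrade
```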

Specific Version Upgrades


This section is intended as a guide for any special instructions and information related to upgrading to specific versions of Enterprise.

Upgrading Enterprise to 4.4.1

If you are upgrading from an Anchore Enterprise version prior to 4.2.0, there is a known issue that will require you to upgrade to 4.2.0 or 4.3.0 first. Once completed, you will have no issues upgrading to 4.4.1. Please contact Anchore Support if you need further assistance.

Please Note: This issue was addressed in 4.5.0. Upgrading from a version prior to 4.2.0 will succeed in 4.5.0 and newer releases.

4.1 - 5.0 Migration Guide

This guide will help you understand, plan, and execute the migration of your Anchore deployment from Enterprise 4.x to 5.0. The move to Enterprise 5.0 involves several breaking changes and is more complex than a regular Anchore feature-release upgrade.

There are four significant component changes required to migrate to Enterprise 5.0, each with its own migration path. This document will help you migrate all components in a safe and downtime-minimizing way.

The components are:

  1. Anchore Enterprise: provides a new V2 API.
    • 5.0 will support only the new V2 API
    • 4.9 supports both V1 and V2 APIs
  2. PostgreSQL Database: version 13 or higher is required for Enterprise 5.0
  3. Enterprise Helm Chart:
    • 5.0 can be deployed only with the new enterprise Helm chart.
    • The older anchore-engine chart will be at end-of-life with the 4.x series.
  4. Integrations & Clients: all Anchore-provided integrations have new released versions that are compatible with 5.0 and support the new V2 API.

Note: This is a recommended migration process that ends with a running 5.0 deployment, but you may start the migration today or wait until after 5.0 is released. The Anchore software does not force you to upgrade at new releases, so take your time and plan the migration steps when it makes sense for you.

This guide will walk you through the process to go from this starting state.

Pre Migration: <= v4.8 with V1 API Only

graph
  anchore("Enterprise <= v4.8 w/V1 API")
  db[("PostgreSQL 9.6")]
  chart["anchore-engine chart"]
  ctl["anchorectl v1.x"]
  anchore --uses--> db
  chart --deploys--> anchore
  ctl --v1 api calls--> anchore

To this ending state where you are in production running Enterprise v5.0.0.

Post Migration: Full 5.0 with V2 API Only

graph
  anchore("Enterprise v5.0.x w/V2 API")
  db[("PostgreSQL 13+")]
  chart["enterprise chart"]
  ctl["anchorectl v5.0.x"]
  anchore --uses--> db
  chart --deploys--> anchore
  ctl --v2 api calls--> anchore

Note: The upgrade to v4.9.x is very strongly recommended for all deployments as a key part of the migration process to 5.0. If you use ANY integrations or API calls, you should run v4.9.x with its dual-API support while you migrate all your integrations to use the V2 API.

Planning Your Migration

Timing: Each phase has its own expected duration; below we review the expectations and process for each phase of the migration. You should expect and plan for downtime in every phase except the client API migrations, which are done while the system is running.

The migration may be a multi-day process, since client migrations may take days or weeks depending on your organization and how many other systems integrate with your Anchore deployment.

Combining Phases: Phases can be combined if you wish to use a smaller number of larger maintenance windows. Since combining phases increases the complexity of each phase and associated risk of misconfigurations or errors, the combination should be carefully considered for your specific needs and risk tolerance.

Migration Path 1: Chart-Managed Database

If you have PostgreSQL deployed in Kubernetes using the Anchore-Engine Helm Chart, then this is the migration path for you.

graph
  subgraph Start
    %% Start at v4.8.x or earlier, using postgres 9.6 and the anchore-engine helm chart
    anchore4("Enterprise <= v4.8.x")
    pg9[("PostgreSQL 9.6")]
    engineChart["anchore-engine chart"]
    anchorectl("anchorectl v1.7.x") --V1 api calls--> anchore4
    anchore4 --uses--> pg9
    engineChart --deploys--> anchore4
  end
  subgraph step1[Latest Enterprise v4.9.x]
    %% Upgrade to v4.9.x for V2 API
    anchore49_1("Enterprise v4.9.x")
    pg9_2[("PostgreSQL 9.6")]
    engineChart1["anchore-engine chart"]
    anchore49_1 --uses--> pg9_2
    anchorectl3("anchorectl v1.8.x") --V1 api calls--> anchore49_1
    engineChart1 --deploys--> anchore49_1
  end
  subgraph step2[Chart and DB Migrated]
    %% Migrate to new chart & DB migration to PG13, no Anchore version change
    anchore49("Enterprise = v4.9.x")
    pg13[("PostgreSQL 13+")]
    pg96[("PostgreSQL 9.6")]
    engineChart2["anchore-engine chart"]
    enterpriseChart["enterprise chart"]
    engineChart2 --uses--> pg96
    pg96 --migrates to--> pg13
    anchore49 --uses--> pg13
    anchorectl2("anchorectl v1.8.x") --V1 api calls--> anchore49
    enterpriseChart --deploys--> anchore49
  end
  subgraph step3[Integrations Migrated]
    %% Upgrade integrations/AnchoreCTL
    anchoreInter3("Enterprise v4.9.x")
    engineChart3["anchore-engine chart"]
    enterpriseChart2["enterprise chart"]
    pg13_4[("PostgreSQL 13+")]
    pg96_2[("PostgreSQL 9.6")]
    engineChart3 --> pg96_2
    anchoreInter3 --> pg13_4
    anchorectl5("anchorectl v4.9.x") --V2 api calls--> anchoreInter3
    enterpriseChart2 --deploys--> anchoreInter3
  end
  subgraph finish["Enterprise v5.0"]
    %% Upgrade to v5.0
    anchore5("Enterprise v5.x")
    enterpriseChart3["enterprise chart"]
    pg13_5[("PostgreSQL 13+")]
    anchore5 --> pg13_5
    anchorectl6("anchorectl v5.0.x") --V2 api calls--> anchore5
    enterpriseChart3 --deploys--> anchore5
  end
  Start --Upgrade Anchore Enterprise to latest v4.9.x release--> step1;
  step1 --Migrate to Enterprise Chart and PG13+ DB--> step2;
  step2 --Migrate integrations & anchorectl to use V2 API--> step3;
  step3 --Upgrade Anchore Enterprise to v5.x & delete 4.0.x deployment--> finish;

Step 1: Upgrade Anchore Enterprise to latest v4.9.x Release

Downtime: Required

Upgrade your Anchore deployment to v4.9.x. This is an important step for several reasons:

  1. It is supported by both the legacy anchore-engine helm chart and the new enterprise helm chart
  2. It supports PostgreSQL 9.6+ and newer (13+), so it provides a stable base to execute the other upgrade steps
  3. It supports both the V1 and V2 APIs, so you can have a stable Anchore version for updating all your integrations

Upgrade mechanism: normal Anchore Enterprise upgrade process

Step 2: Migrate to Enterprise Chart and PostgreSQL 13

Downtime: Required

Helm Migration Guide

Step 3: Migrate all integrations and clients to V2 API compatible versions

Downtime: None for Anchore itself, but individual integrations may vary

Once your deployment is running v4.9.x, you have a stable platform for migrating integrations and clients to the V2 API of Enterprise. You should perform the upgrades/migrations for the new V2 API in this phase. This phase may last a while; it does not end until all your API calls use the V2 endpoint instead of V1.

Integration                     | Recommended V2 API Compatible Version
AnchoreCTL                      | v4.9.0
anchore-k8s-inventory           | v1.1.1
anchore-ecs-inventory           | v1.2.0
Kubernetes Admission Controller | v0.5.0
Jenkins Plugin                  | v1.1.0
Harbor Scanner Adapter          | v1.2.0
enterprise-gitlab-scan          | v4.0.0

Upgrading AnchoreCTL Usage in CI

The installation script provided via Deploying AnchoreCTL will only automatically deploy new releases that are V1 API compatible, so you need to update your use of that script to pin specific versions.

For example, use:

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> v4.9.0

Confirming V1 API is no longer in use

To verify that all clients have been updated, you can review the logs from the API containers in your v4.9.x deployment. We recommend that you monitor for multiple days to verify there are no periodic processes that still use the old endpoint.
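On a docker-compose quickstart deployment, something like the following can surface any remaining V1 traffic. The "api" service name and the /v1/ path prefix are assumptions about your deployment and log format; adapt accordingly (e.g. kubectl logs for Kubernetes):

```shell
# Count API log lines that mention the V1 endpoint; a count of 0 over a
# multi-day window suggests all clients have been migrated to V2.
docker compose logs api | grep -c '/v1/'
```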

Step 4: Upgrade to Enterprise v5.0

Downtime: Required

Helm Upgrade Guide

Upgrading AnchoreCTL

You will want to install the v5.0.1 compatible version of AnchoreCTL at this time as well.

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> v5.0.1

Migration Path 2: External DB

If you deploy PostgreSQL using any mechanism other than the Anchore-provided chart (AWS RDS, your own DB chart, Google CloudSQL, etc.), then this is the migration plan for you.

graph
  subgraph Start[Enterprise v4.x]
    anchoreStart("Enterprise <= v4.8.X")
    pg9[("PostgreSQL 9.6")]
    engineChart["anchore-engine chart"]
    anchorectl("anchorectl v1.7.x") --V1 api calls--> anchoreStart
    anchoreStart --uses--> pg9
    engineChart --deploys--> anchoreStart
  end
  subgraph step1[Latest Enterprise v4.9.x]
    %% Upgrade to v4.9.x for V2
    anchoreInter1("Enterprise v4.9.x")
    pg9_2[("PostgreSQL 9.6")]
    engineChart2["anchore-engine chart"]
    anchoreInter1 --uses--> pg9_2
    anchorectl3("anchorectl v1.8.x") --V1 api calls--> anchoreInter1
    engineChart2 --deploys--> anchoreInter1
  end
  subgraph step2[Enterprise Helm Chart]
    %% Use new chart
    anchoreInter2("Enterprise v4.9.x")
    enterpriseChart["enterprise chart"]
    pg9_3[("PostgreSQL 9.6")]
    anchoreInter2 --> pg9_3
    anchorectl4("anchorectl v1.8.x") --V1 api calls--> anchoreInter2
    enterpriseChart --deploys--> anchoreInter2
  end
  subgraph step3[PostgreSQL 13+]
    %% Migrate to PG13+, no Anchore version change
    anchoreInter3("Enterprise = v4.9.x")
    pg13[("PostgreSQL 13+")]
    enterpriseChart2["enterprise chart"]
    anchoreInter3 --uses--> pg13
    anchorectl2("anchorectl v1.8.x") --V1 api calls--> anchoreInter3
    enterpriseChart2 --deploys--> anchoreInter3
  end
  subgraph step4[Integrations using V2 API]
    %% Upgrade integrations/AnchoreCTL
    anchoreInter4("Enterprise v4.9.x")
    enterpriseChart3["enterprise chart"]
    pg13_4[("PostgreSQL 13+")]
    anchoreInter4 --> pg13_4
    anchorectl5("anchorectl v4.9.x") --V2 api calls--> anchoreInter4
    enterpriseChart3 --deploys--> anchoreInter4
  end
  subgraph finish[Enterprise v5.0.x]
    %% Upgrade to v5.0.x
    anchore5("Enterprise v5.0.x")
    enterpriseChart4["enterprise chart"]
    pg13_5[("PostgreSQL 13+")]
    anchore5 --> pg13_5
    anchorectl6("anchorectl v5.0.x") --V2 api calls--> anchore5
    enterpriseChart4 --deploys--> anchore5
  end
  Start --Upgrade to latest v4.9.x Enterprise--> step1;
  step1 --Migrate to Enterprise Helm Chart--> step2;
  step2 --Upgrade External DB to PostgreSQL 13+--> step3;
  step3 --Migrate Integrations and AnchoreCTL to use V2 API--> step4;
  step4 --Upgrade Anchore to v5.0.x--> finish;

Step 1: Upgrade to latest Anchore Enterprise v4.9.x

Downtime: Required

Upgrade your Anchore deployment to v4.9.x. This is an important step for several reasons:

  1. It is supported by both the legacy anchore-engine helm chart and the new enterprise helm chart
  2. It supports PostgreSQL 9.6+ and newer (13+), so it provides a stable base to execute the other upgrade steps
  3. It supports both the V1 and V2 APIs, so you can have a stable Anchore version for updating all your integrations

Step 2: Upgrade PostgreSQL from 9.6.x to 13+

Downtime: Required

Enterprise v5.0 requires PostgreSQL 13 or later to run. The DB upgrade process will be specific to your deployment mechanism and the way you run PostgreSQL. Depending on the version of PostgreSQL you start from, multiple PostgreSQL upgrade operations may be necessary to reach 13+.

However, this upgrade can be done with any Anchore version. All 4.x versions of Anchore already support PostgreSQL 13+, so the DB upgrade can be executed outside any changes to the Anchore deployment itself.

If you are using AWS RDS or another cloud platform for hosting your PostgreSQL database, please refer to their upgrade documentation for the best practices to upgrade your instance(s) to version 13 or higher.
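If you manage PostgreSQL yourself, one common approach is a dump-and-restore into a new 13+ instance. This is only a sketch: the hostnames and the "postgres" superuser are assumptions, and cloud-managed databases typically have their own supported in-place upgrade mechanisms that should be preferred.

```shell
# Dump the entire cluster from the old 9.6 server, then load it into the
# new 13+ server (hostnames are placeholders for illustration):
pg_dumpall -h old-db-host -U postgres -f anchore_dump.sql
psql -h new-db-host -U postgres -f anchore_dump.sql
```

Remember to point your Anchore configuration at the new database host once the restore is verified.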

Step 3: Migrate to Enterprise Helm Chart

Downtime: Required

Helm Migration Guide

Step 4: Upgrade all your integrations/clients to use the V2 API

Downtime: None for Anchore itself, but individual integrations may vary

Once your deployment is running v4.9.x, you have a stable platform for migrating integrations and clients to the V2 API of Enterprise. You should perform the upgrades/migrations for the new V2 API in this phase. This phase may last a while; it does not end until all your API calls use the V2 endpoint instead of V1.

Integration                     | Recommended V2 API Compatible Version
AnchoreCTL                      | v4.9.0
anchore-k8s-inventory           | v1.1.1
anchore-ecs-inventory           | v1.2.0
Kubernetes Admission Controller | v0.5.0
Jenkins Plugin                  | v1.1.0
Harbor Scanner Adapter          | v1.2.0
enterprise-gitlab-scan          | v4.0.0

Upgrading AnchoreCTL Usage in CI

The installation script provided via Deploying AnchoreCTL will only automatically deploy new releases that are V1 API compatible, so you need to update your use of that script to pin specific versions.

For example, use:

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> v4.9.0

Confirming V1 API is no longer in use

To verify that all clients have been updated, you can review the logs from the API containers in your v4.9.x deployment. We recommend that you monitor for multiple days to verify there are no periodic processes that still use the old endpoint.

Step 5: Upgrade to Enterprise v5.0

Downtime: Required

Helm Upgrade Guide

Upgrading AnchoreCTL

You will want to install the v5.0.1 compatible version of AnchoreCTL at this time as well.

curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b <DESTINATION_DIR> v5.0.1

Verifying the Upgrade

Verify the versions you are running after the upgrade:

  • anchorectl version – all users should see ‘5.0.1’ for the AnchoreCTL version
  • anchorectl system status – the system should return ‘v5.0.0’