Troubleshooting
This section contains some general troubleshooting for your Anchore Enterprise instance. When troubleshooting Anchore Enterprise, the recommended approach is to first verify all Anchore services are up, use the event subsystem to narrow down particular issues, and then navigate to the logs for specific services to find out more information.
Throughout this section, AnchoreCTL commands will be executed to assist with troubleshooting. For more information about AnchoreCTL, please reference the AnchoreCTL section.
1 - Smoke Testing
Smoke testing typically refers to a testing methodology that validates critical functionality of software. Versions of AnchoreCTL later than 5.6.0 include a smoke-tests
option, which can be used to validate the general functionality of your Anchore Enterprise deployment.
We recommend using this mechanism to validate functionality after upgrades.
Tip: the check-admin-credentials
test looks for an admin user in the admin account context as defined in your anchorectl.yaml
./anchorectl system smoke-tests run
⠇ Running smoke tests
...
✔ Ran smoke tests
┌───────────────────────────────────────┬─────────────────────────────────────────────────┬────────┬────────┐
│ NAME │ DESCRIPTION │ RESULT │ STDERR │
├───────────────────────────────────────┼─────────────────────────────────────────────────┼────────┼────────┤
│ wait-for-system │ Wait for the system to be ready │ pass │ │
│ check-admin-credentials │ Check anchorectl credentials to run smoke tests │ pass │ │
│ create-test-account │ Create a test account │ pass │ │
│ list-test-policies │ List the test policies │ pass │ │
│ get-test-policy │ Get the test policy │ pass │ │
│ activate-test-default-policy │ Activate the test default policy │ pass │ │
│ create-test-image │ Create a test image and wait for analysis │ pass │ │
│ get-test-image │ Get the test image │ pass │ │
│ activate-test-subscription │ Activate a test subscription │ pass │ │
│ get-test-subscription │ Get the test subscription │ pass │ │
│ deactivate-test-vuln-subscription │ Deactivate the vuln subscription │ pass │ │
│ deactivate-test-policy-subscription │ Deactivate the policy subscription │ pass │ │
│ deactivate-test-tag-subscription │ Deactivate the tag subscription │ pass │ │
│ deactivate-test-analysis-subscription │ Deactivate the analysis subscription │ pass │ │
│ check-test-image │ Check the test image │ pass │ │
│ get-test-image-vulnerabilities │ Get the test image vulnerabilities │ pass │ │
│ delete-test-image │ Delete the test image │ pass │ │
│ disable-test-account │ Disable the test account │ pass │ │
│ delete-test-account │ Delete the test account │ pass │ │
└───────────────────────────────────────┴─────────────────────────────────────────────────┴────────┴────────┘
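In automation, the command's exit status can gate a post-upgrade pipeline. A minimal sketch (the CI wiring is hypothetical; it assumes anchorectl is installed and configured, and skips cleanly when it is not):

```shell
#!/bin/sh
# Hypothetical post-upgrade gate: fail a pipeline when smoke tests fail.
# Assumes anchorectl is installed and configured; skips cleanly otherwise.
if ! command -v anchorectl >/dev/null 2>&1; then
    result="skipped (anchorectl not installed)"
elif anchorectl system smoke-tests run; then
    result="pass"
else
    result="fail"
fi
echo "smoke tests: $result"
```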
2 - Viewing Logs
Anchore services produce detailed logs that contain information about user interactions, internal processes, warnings and errors. The verbosity of the logs is controlled using the log_level setting in config.yaml (for manual installations) or the corresponding ANCHORE_LOG_LEVEL environment variable (for docker compose or Helm installations) for each service.
The log levels are DEBUG, INFO, WARN, ERROR, and FATAL, where the default is INFO. Most of the time the default level is sufficient, as the logs will contain WARN, ERROR, and FATAL messages as well. For deep troubleshooting, however, we recommend increasing the log level to DEBUG to ensure the maximum amount of information is available.
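For example, in a Docker Compose deployment, raising one service's verbosity via the environment variable might look like this (the service name shown is illustrative):

```yaml
services:
  catalog:
    environment:
      - ANCHORE_LOG_LEVEL=DEBUG
```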
Anchore logs can be accessed by inspecting the docker logs for any Anchore service container using the regular docker logging mechanisms, which by default display the stdout/stderr of the containers themselves. For example:
# docker ps
...
33c809f1803a anchore/anchore-engine:latest "/docker-entrypoint.…" 22 hours ago Up 22 hours (healthy) 8228/tcp aevolume_engine-catalog_1
...
# docker logs aevolume_engine-analyzer_1
[service:worker] 2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.twisted/makeService()] [INFO] Initializing configuration
[service:worker] 2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.twisted/makeService()] [INFO] Initializing logging
[service:worker] 2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/initialize()] [DEBUG] Invoking instance-specific handler registration
[service:worker] 2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_register_instance_handlers()] [INFO] Registering api handlers
[service:worker] 2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_process_stage_handlers()] [INFO] Processing init handlers for bootsrap stage: pre_config
[service:worker] 2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_process_stage_handlers()] [DEBUG] Executing 0 stage pre_config handlers
[service:worker] 2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_configure()] [INFO] Loading and initializing global configuration
[service:worker] 2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_configure()] [INFO] Configuration complete
...
...
The logs themselves are also persisted as log files inside the Anchore service containers. If you exec a shell into any Anchore service container and navigate to /var/log/anchore
, you will find the service log files. For example, using the same analyzer container as described previously:
# docker exec -t -i aevolume_engine-analyzer_1 /bin/bash
[anchore@687818c10b93 anchore-engine]$ cat /var/log/anchore/anchore-worker.log
2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.twisted/makeService()] [INFO] Initializing configuration
2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.twisted/makeService()] [INFO] Initializing logging
2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/initialize()] [DEBUG] Invoking instance-specific handler registration
2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_register_instance_handlers()] [INFO] Registering api handlers
2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_process_stage_handlers()] [INFO] Processing init handlers for bootsrap stage: pre_config
2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_process_stage_handlers()] [DEBUG] Executing 0 stage pre_config handlers
2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_configure()] [INFO] Loading and initializing global configuration
2019-12-06 00:54:20+0000 [-] [MainThread] [anchore_engine.service/_configure()] [INFO] Configuration complete
...
...
If you are using Kubernetes to run Anchore Enterprise, you can retrieve the logs from the service pods directly using kubectl
commands:
Tip: You can find the desired pod name with kubectl get pods
# kubectl logs -n <your-namespace> <your-anchore-pod-name>
[uvicorn:anchore-enterprise-apiext] [2024-06-05T19:26:51.508107+00:00] [MainProcess] [MainThread] [INFO] [asyncio.runners/run():190] | 10.244.1.1:55706 - "GET /health HTTP/1.1" 200
[uvicorn:anchore-enterprise-apiext] [2024-06-05T19:26:54.080546+00:00] [MainProcess] [MainThread] [INFO] [asyncio.runners/run():190] | 10.244.1.2:56838 - "GET /health HTTP/1.1" 200
[uvicorn:anchore-enterprise-apiext] [2024-06-05T19:26:54.085999+00:00] [MainProcess] [MainThread] [INFO] [asyncio.runners/run():190] | 10.244.1.2:56854 - "GET /health HTTP/1.1" 200
[uvicorn:anchore-enterprise-apiext] [2024-06-05T19:26:54.089404+00:00] [MainProcess] [MainThread] [INFO] [asyncio.runners/run():190] | 10.244.1.2:56860 - "GET /version HTTP/1.1" 200
[uvicorn:anchore-enterprise-apiext] [2024-06-05T19:26:56.773285+00:00] [MainProcess] [MainThread] [INFO] [asyncio.runners/run():190] | 10.244.1.1:46424 - "GET /health HTTP/1.1" 200
[uvicorn:anchore-enterprise-apiext] [2024-06-05T19:27:00.508614+00:00] [MainProcess] [MainThread] [INFO] [asyncio.runners/run():190] | 10.244.1.5:59244 - "POST /v2/kubernetes-inventory HTTP/1.1" 201
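To collect recent logs from every Anchore pod at once, a loop over kubectl can help. A sketch, assuming the namespace is named anchore (adjust for your deployment) and skipping cleanly if kubectl is absent:

```shell
#!/bin/sh
# Sketch: dump recent logs from every pod in the Anchore namespace.
# The namespace name "anchore" is an assumption; adjust for your deployment.
NS="${ANCHORE_NS:-anchore}"
if command -v kubectl >/dev/null 2>&1; then
    # "|| true" keeps the loop harmless when no cluster is reachable.
    for pod in $(kubectl get pods -n "$NS" -o name 2>/dev/null || true); do
        echo "===== $pod ====="
        kubectl logs -n "$NS" --tail=100 "$pod"
    done
else
    echo "kubectl not found; skipping log collection"
fi
```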
3 - Viewing System Events
If you’ve successfully verified that all Anchore Enterprise services are up, but are still running into issues operating Anchore, a good place to check is the event log.
The event log subsystem provides users with a mechanism to inspect asynchronous events occurring across various Anchore Enterprise services. Anchore events include periodically-triggered activities such as vulnerability data feed sync in the policy_engine service, image analysis failures originating from the analyzer service, and other informational or system fault events. The catalog service may also generate events for any repositories or image tags that are being watched when Anchore Enterprise encounters connectivity, authentication, authorization, or other errors in the process of checking for updates.
The event log is aimed at troubleshooting most common failure scenarios, especially those that happen during asynchronous operations, and to pinpoint the reasons for failures that can be used subsequently to help with corrective actions. Events can be cleared from Anchore Enterprise in bulk or individually.
Viewing Events
Running the following command will give a list of recent Anchore events: anchorectl event list
# Viewing list of recent Anchore events
# anchorectl event list
✔ List events
┌──────────────────────────────────┬──────────────────────────────────────────────┬───────┬───────────────────────────────────────────────────────┬─────────────────┬────────────────┬────────────────────┬─────────────────────────────┐
│ UUID │ EVENT TYPE │ LEVEL │ RESOURCE ID │ RESOURCE TYPE │ SOURCE SERVICE │ SOURCE HOST │ TIMESTAMP │
├──────────────────────────────────┼──────────────────────────────────────────────┼───────┼───────────────────────────────────────────────────────┼─────────────────┼────────────────┼────────────────────┼─────────────────────────────┤
│ 329ff24aa77549458e2656f1a6f4c98f │ system.image_analysis.registry_lookup_failed │ error │ dockerr.io/alpine:3.4 │ image_reference │ catalog │ anchore-quickstart │ 2022-08-24T22:08:29.026352Z │
│ 4010f105cf264be6839c7e8ca1a0c46e │ system.image_analysis.registry_lookup_failed │ error │ dockerr.io/alpine:latest │ image_reference │ catalog │ anchore-quickstart │ 2022-08-24T22:08:28.991101Z │
│ 6924eb83313746ff8b842a88654e3ac1 │ system.image_analysis.registry_lookup_failed │ error │ dockerr.io/alpine:3.12 │ image_reference │ catalog │ anchore-quickstart │ 2022-08-24T22:08:28.956321Z │
│ efdcf727647c458f85cb6464926e474d │ system.image_analysis.registry_lookup_failed │ error │ dockerr.io/nginx:latest │ image_reference │ catalog │ anchore-quickstart │ 2022-08-24T22:08:28.920222Z │
...
│ 1eb04509b2bc44208cdc7678eaf76fef │ user.image.analysis.completed │ info │ docker.io/ubuntu:latest │ image_tag │ analyzer │ anchore-quickstart │ 2022-08-24T22:06:13.736004Z │
│ 6f735f8db7e84ce19b221d3b024318af │ user.image.analysis.processing │ info │ docker.io/ubuntu:latest │ image_tag │ analyzer │ anchore-quickstart │ 2022-08-24T22:06:13.128912Z │
│ 480eb191f87440b48c9f8cfa6529badf │ user.image_tag.added │ info │ docker.io/ubuntu:latest │ image_tag │ catalog │ anchore-quickstart │ 2022-08-24T22:06:08.307039Z │
...
└──────────────────────────────────┴──────────────────────────────────────────────┴───────┴───────────────────────────────────────────────────────┴─────────────────┴────────────────┴────────────────────┴─────────────────────────────┘
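The JSON output (anchorectl -o json event list) can be filtered with jq to surface only error-level events. A sketch using illustrative sample data in place of the live response; verify the actual JSON shape against your own output before relying on the filter:

```shell
#!/bin/sh
# Sketch: count error-level events from JSON event output (requires jq).
# The JSON below is illustrative sample data, not a live API response.
sample='[{"event":{"level":"error"}},{"event":{"level":"info"}},{"event":{"level":"error"}}]'
if command -v jq >/dev/null 2>&1; then
    # Keep only entries whose level is "error", then count them.
    errors=$(printf '%s' "$sample" | jq '[.[] | select(.event.level == "error")] | length')
else
    errors="unknown (jq not installed)"
fi
echo "error-level events: $errors"
```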
Details about a specific event
If you would like more information about a specific event, you can run the following command: anchorectl event get <event-id>
# Details about a specific Anchore event
# anchorectl event get 1eb04509b2bc44208cdc7678eaf76fef
✔ Fetched event
UUID: 1eb04509b2bc44208cdc7678eaf76fef
Event:
Event Type: user.image.analysis.completed
Level: info
Message: Image analysis available
Resource:
Resource ID: docker.io/ubuntu:latest
Resource Type: image_tag
User Id: admin
Source:
Source Service: analyzer
Base Url: http://analyzer:8228
Source Host: anchore-quickstart
Request Id:
Timestamp: 2022-08-24T22:06:13.736004Z
Category:
Details:
Created At: 2022-08-24T22:06:13.832881Z
Note: Depending on the output from the detailed events, the next troubleshooting step is to look into the logs for the particular service named (for example, policy_engine).
4 - Verifying Feeds
Anchore Enterprise runs a feed service which downloads vulnerability data from a number of configurable sources.
This data is stored on disk and processed into a holistic vulnerability dataset. Once built and compiled, the dataset is stored in a PostgreSQL database and served to the other Anchore Enterprise application services via an API endpoint.
The API endpoint is periodically queried by the policy service to fetch the latest dataset.
If a newer vulnerability dataset is available, the policy service will download and propagate the new dataset across all instances/pods. This updated data is used to generate vulnerability analysis results.
The accuracy of vulnerability analysis is determined by the ability of your Anchore Enterprise deployment to download, store, and distribute this feed data. Ensuring the health of this process is critical to the operation of the platform.
Run $ anchorectl feed list
as admin and:
- Ensure the last sync date shown is recent and that the feed has enabled set to true.
- Review the feed list to confirm the required feed sources are enabled.
- Missing an expected source? Review the configuration and operational sections on this page.
Run $ anchorectl feed sync
as admin, which will:
- Queue an update to fetch and propagate feed data across internal services. Without a manual sync, this happens on a regular schedule.
- Note: It can take several hours to download, build, and distribute the dataset.
You can also visually check the health in the ‘System’ section of the UI when logged in as admin.
Recommended best practices
Anchore Enterprise relies on multiple sources in order to build a high resolution picture of vulnerability data.
It is generally recommended that all vendor sources are enabled.
Direct Mode
We highly recommend that you enable the GitHub GHSA feed for high-quality vulnerability data.
- This is often misconfigured due to the nested feeds property in the Helm values file, because the feeds service is both a dependency and an independent chart.
- You must also set a valid GitHub token; see below for a complete Helm values example.
feeds:
  anchoreConfig:
    feeds:
      drivers:
        github:
          enabled: true
          # The GitHub feeds driver requires a GitHub developer personal access token with no permission scopes selected.
          # See https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/creating-a-personal-access-token
          token: your-github-token
Direct & Proxy Mode
Enable the applicable feeds within Anchore. For example, if you use Ubuntu base images, enable the vendor-specific feed, in this case Ubuntu. See the Helm values example below:
feeds:
  anchoreConfig:
    feeds:
      drivers:
        PROVIDER: true
For an air-gapped environment, your high-side feed service should be configured with api_only: true. This ensures the feed service will only serve the built databases stored locally.
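In Helm values, the high-side setting might be expressed as follows (a sketch; verify the exact key path against your chart version):

```yaml
feeds:
  anchoreConfig:
    feeds:
      api_only: true
```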
Finally, operate your feeds database on a separate server from the Anchore server. This ensures separation of concerns, eases backups, and reduces performance impact. Where possible, use a managed PostgreSQL service such as RDS.
Operational & Configuration checks
Check that the feed pod/container has enough disk space:
Storage
- Ensure there is more than 10 GB of free disk space for the /workspace directory, where feed data is stored on the feeds pod / container.
- Exec into the pod / container and check disk usage with
$ df -h
- You can specify an external volume mount for the /workspace directory instead of using the default node / host disk.
- If you map an external volume, we recommend a local or fast disk rather than slower storage such as NFS.
Compute
- Ensure your feed pod / container has the required compute, memory and disk resources as covered in requirements.
- Ensure your feeds DB server or pod/container has enough resources. We recommend 1 CPU and 2 GB RAM minimum.
Network
- Ensure your feed pod / container has network connectivity to the feed source(s) by exec’ing into the container and then:
  - Verifying each domain is reachable by running a connectivity test on each source, e.g.
$ curl https://secdb.alpinelinux.org
  - If you are using Proxy Mode, you will want to test
$ curl https://enterprise.vunnel.feed.anchore.io/
  - If you are using Direct Mode, you will want to test all applicable vulnerability sources on the domains listed here.
- If you have a network proxy deployed, you might need to configure your feed service to utilize it:
  - Depending on the proxy, you might also need to use a custom cert in your Anchore deployment as per the instructions here.
  - You can test whether this is required and working by running
$ curl -v https://secdb.alpinelinux.org
and verifying the output returns “SSL certificate verify ok.”
- Ensure your policy pod / container has network connectivity to your local feed pod / container:
  - Confirm that e.g.
curl http://anchore-feeds:8448/v2/databases/grypedb
or curl $ANCHORE_GRYPE_DB_URL
returns a success response.
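The per-source checks above can be looped in one probe script. A sketch; the URL list is an example and should be replaced with the sources your enabled drivers actually use:

```shell
#!/bin/sh
# Sketch: probe each feed source for reachability (10s timeout per URL).
# The URL list is an example; substitute the sources your drivers use.
reachable=0
unreachable=0
for url in https://secdb.alpinelinux.org https://enterprise.vunnel.feed.anchore.io/; do
    if command -v curl >/dev/null 2>&1 && curl -fsS --max-time 10 -o /dev/null "$url"; then
        echo "reachable:   $url"
        reachable=$((reachable + 1))
    else
        echo "unreachable: $url"
        unreachable=$((unreachable + 1))
    fi
done
```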
Configuration
- Check your policy pod / container configuration by exec’ing into the container and then:
  - Output the configuration with
$ cat '/config/config.yaml'
and check that the policy_engine.vulnerabilities.sync.data.grypedb.url property is set.
  - Ensure the set value (e.g. http://anchore-feeds:8448/v2/databases/grypedb, or check $ echo $ANCHORE_GRYPE_DB_URL) points to your desired local feed service.
  - Query the set value with curl and review the response, which should be a list of one or more grypedb vulnerability records.
  - Update your Helm values or Compose environment variables to resolve any mismatch.
  - Note: if no value is set, this can also point to our OSS feeds, which offer less data than the Enterprise feeds.
- Check that your desired configuration has been correctly applied for both the feeds and policy services:
  - In each pod / container run
$ cat '/config/config.yaml'
or /scripts/anchore-config
for Helm deployments.
  - Review the policy_engine and feeds sections in the respective configuration output, ensure they match your desired configuration, and adjust if required.
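The checks above can be combined into a single reachability probe for the Grype DB endpoint. A sketch; the default URL is an assumption, and $ANCHORE_GRYPE_DB_URL, if set, takes precedence:

```shell
#!/bin/sh
# Sketch: check that the configured Grype DB listing endpoint responds.
# The default URL below is an assumption; $ANCHORE_GRYPE_DB_URL overrides it.
url="${ANCHORE_GRYPE_DB_URL:-http://anchore-feeds:8448/v2/databases/grypedb}"
if command -v curl >/dev/null 2>&1 && curl -fsS --max-time 10 "$url" >/dev/null; then
    echo "grype db endpoint reachable: $url"
else
    echo "grype db endpoint NOT reachable: $url"
fi
```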
Operational
- Patience: for both Direct and Proxy Mode, it can take several hours to download, build, and distribute an updated vulnerability dataset.
- Check the status of a feeds sync “run” on the feeds pod / container by:
  - Reviewing the feed logs for any errors. You can enable DEBUG logging for increased verbosity.
  - Querying the feeds tasks API to return metadata about the current and past feeds “runs”. For the current run, find the highest task id that is a parent id.
    - Helm: $ curl ${ANCHORE_FEEDS_EXTERNAL_URL}tasks and, for the latest “run” results, $ curl ${ANCHORE_FEEDS_EXTERNAL_URL}tasks/<parent-id-or-task-id>
    - Compose: $ curl http://feeds:8448/v2/tasks and, for the latest “run” results, $ curl http://feeds:8448/v2/tasks/<parent-id-or-task-id>
NOTE:
- On occasion a feed source might be experiencing downtime or a maintenance window. In this scenario, the feeds service will continue to operate for the remaining providers and will retry on the next run to fetch the latest data.
5 - Verifying Service Health
You can verify which services have registered themselves successfully, along with their status, by running: anchorectl system status
# anchorectl system status
✔ Status system
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE │ HOST ID │ URL │ UP │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ reports │ anchore-quickstart │ http://reports:8228 │ true │ available │ 590 │ 5.9.0 │
│ analyzer │ anchore-quickstart │ http://analyzer:8228 │ true │ available │ 590 │ 5.9.0 │
│ notifications │ anchore-quickstart │ http://notifications:8228 │ true │ available │ 590 │ 5.9.0 │
│ apiext │ anchore-quickstart │ http://api:8228 │ true │ available │ 590 │ 5.9.0 │
│ policy_engine │ anchore-quickstart │ http://policy-engine:8228 │ true │ available │ 590 │ 5.9.0 │
│ reports_worker │ anchore-quickstart │ http://reports-worker:8228 │ true │ available │ 590 │ 5.9.0 │
│ simplequeue │ anchore-quickstart │ http://queue:8228 │ true │ available │ 590 │ 5.9.0 │
│ catalog │ anchore-quickstart │ http://catalog:8228 │ true │ available │ 590 │ 5.9.0 │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘
Note: If specific services are down, you can investigate the logs for the services. For more information, see Viewing Logs.
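The same status information is available as JSON, which makes an automated health gate possible. A sketch using illustrative sample data; verify the field names against your own anchorectl -o json system status response before relying on them:

```shell
#!/bin/sh
# Sketch: list services reported as down (requires jq).
# Field names mirror the serviceStates JSON but should be verified locally;
# the sample here is illustrative data, not a live response.
sample='{"serviceStates":[{"servicename":"catalog","status":true},{"servicename":"analyzer","status":false}]}'
if command -v jq >/dev/null 2>&1; then
    # Select entries whose status is false and print their names.
    down=$(printf '%s' "$sample" | jq -r '.serviceStates[] | select(.status == false) | .servicename')
else
    down=""
fi
if [ -z "$down" ]; then echo "all services up"; else echo "services down: $down"; fi
```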
The -vvv and -o json options
Passing the -vvv
option to AnchoreCTL can often help narrow down particular issues by displaying the client configuration and client functions as they are running:
# Example system status with -vvv
# anchorectl -vvv system status
[0000] INFO anchorectl version: 5.9.0
[0000] DEBUG application config:
url: http://localhost:8228
username: admin
password: '******'
...
[0000] DEBUG command config:
format: text
[0000] DEBUG checking if new version of anchorectl is available
[0000] TRACE worker stopped component=eventloop
[0000] TRACE bus stopped component=eventloop
┌─────────────────┬────────────────────┬─────────────────────────────┬──────┬────────────────┬────────────┬──────────────┐
│ SERVICE │ HOST ID │ URL │ UP │ STATUS MESSAGE │ DB VERSION │ CODE VERSION │
├─────────────────┼────────────────────┼─────────────────────────────┼──────┼────────────────┼────────────┼──────────────┤
│ reports │ anchore-quickstart │ http://reports:8228 │ true │ available │ 590 │ 5.9.0 │
│ analyzer │ anchore-quickstart │ http://analyzer:8228 │ true │ available │ 590 │ 5.9.0 │
│ notifications │ anchore-quickstart │ http://notifications:8228 │ true │ available │ 590 │ 5.9.0 │
│ apiext │ anchore-quickstart │ http://api:8228 │ true │ available │ 590 │ 5.9.0 │
│ policy_engine │ anchore-quickstart │ http://policy-engine:8228 │ true │ available │ 590 │ 5.9.0 │
│ reports_worker │ anchore-quickstart │ http://reports-worker:8228 │ true │ available │ 590 │ 5.9.0 │
│ simplequeue │ anchore-quickstart │ http://queue:8228 │ true │ available │ 590 │ 5.9.0 │
│ catalog │ anchore-quickstart │ http://catalog:8228 │ true │ available │ 590 │ 5.9.0 │
└─────────────────┴────────────────────┴─────────────────────────────┴──────┴────────────────┴────────────┴──────────────┘
Passing the -o json
output option to AnchoreCTL commands will output the API response data in JSON, which often contains much more information than the default CLI output, both for regular successful operations and for operations that result in an error:
# anchorectl -o json system status
✔ Status system
{
"serviceStates": [
{
"baseUrl": "http://reports_worker:8228",
"hostid": "anchore-quickstart",
"serviceDetail": {
...
...