Prometheus
Anchore Enterprise exposes Prometheus metrics in the API of each service when the config.yaml used by that service has the metrics.enabled key set to true.
Each service exports its own metrics and is typically scraped by a Prometheus installation. Anchore does not aggregate or distribute metrics between services, so configure your Prometheus deployment or integration to scrape the /metrics route on each Anchore service's API port.
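For reference, the relevant fragment of a service's config.yaml looks roughly like the following. This is a minimal sketch; the exact surrounding keys depend on the configuration template used by your deployment.

# Minimal sketch of the metrics section in a service's config.yaml
metrics:
  enabled: true        # expose Prometheus metrics on the service API's /metrics route
  auth_disabled: false # keep authentication enabled for the /metrics route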
Monitoring in Kubernetes and/or Helm Chart
Prometheus is very commonly used for monitoring Kubernetes clusters and is well supported by core Kubernetes services. There are many guides on using Prometheus to monitor a cluster and the services deployed within it, and many other monitoring systems can also consume Prometheus metrics.
The Anchore Helm Chart includes a quick way to enable the Prometheus metrics on each service container:
Set it on the command line:

helm install myanchore anchore/anchore-enterprise --set anchoreConfig.metrics.enabled=true

Or, set it directly in your customized values.yaml:
anchoreConfig:
  ## @param anchoreConfig.metrics.enabled Enable Prometheus metrics for all Anchore services
  ## @param anchoreConfig.metrics.auth_disabled Disable auth on Prometheus metrics for all Anchore services
  ##
  metrics:
    enabled: true
    auth_disabled: false
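If you manage the setting in a values file rather than with --set flags, apply it with a standard Helm upgrade. This is a sketch that assumes the release and chart names from the earlier install command and a values file named values.yaml:

# Apply the customized values to the existing Anchore release
helm upgrade --install myanchore anchore/anchore-enterprise -f values.yaml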
To deploy Prometheus and expose it externally, you can configure an Ingress, served by an Ingress controller such as NGINX, in the Prometheus Helm chart values (the example below uses kube-prometheus-stack values):
# Prometheus Configuration
prometheus:
  ingress:
    enabled: true
    ingressClassName: "nginx"  # Specify your Ingress controller
    hosts:
      - anchoredemo.com  # The hostname to access Prometheus
    paths:  # Note: paths is typically an array
      - /prometheus
    pathType: Prefix  # Or ImplementationSpecific, Exact. 'Prefix' is common.
  prometheusSpec:
    #retention: 10d
    externalUrl: "https://anchoredemo.com/prometheus"  # Ensure this matches your ingress path
    routePrefix: "/prometheus"
    podMonitorSelectorNilUsesHelmValues: false
    podMonitorSelector:
      matchLabels:
        release: "kube-prom-stack"
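As a sketch, values like these can be passed to the kube-prometheus-stack chart when installing or upgrading Prometheus. The release name kube-prom-stack matches the release label used by the PodMonitor later in this document; the repository, namespace, and values file name are assumptions to adjust for your environment:

# Add the community chart repository (skip if already configured)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install or upgrade Prometheus with the values shown above saved as prometheus-values.yaml
helm upgrade --install kube-prom-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  -f prometheus-values.yaml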
Because Anchore scales horizontally, multiple replicas of each service may be running at any given time. This scaling behavior makes it essential to use a monitoring approach that can dynamically discover and track all active pods. Since Anchore creates multiple instances of certain services (for example, analyzers), static service discovery is not always practical, so deploying a PodMonitor is the recommended method for discovering and tracking Anchore pods.
The PodMonitor resource allows Prometheus to:
- Dynamically detect new or removed Anchore pods as the deployment scales up or down.
- Automatically update its scrape configuration without manual intervention.
- Collect metrics directly from the /metrics endpoints of each pod, even when no Service is defined.
If you run kubectl get svc in your Anchore namespace, you will notice that the analyzer has no Service, so it is not reachable through a load balancer. A PodMonitor handles this for Prometheus by scraping each pod directly, even though the pods may expose different ports.
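As a quick check, you can compare the Services and Pods in the Anchore namespace and note the pod labels you will later reference in the PodMonitor selector. The namespace name anchore below is an assumption; substitute your own:

# List Services vs. Pods in the Anchore namespace; the analyzer pods have no Service
kubectl get svc -n anchore
kubectl get pods -n anchore --show-labels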
The PodMonitor finds the Anchore pods, and the Prometheus Operator updates the Prometheus scrape configuration accordingly, so the metrics are discovered and scraped automatically.
PodMonitor Example
You must create the PodMonitor and configure the Secret containing the credentials used to collect metrics from the /metrics endpoints on the pods. The following is an example that can be used as a template for your own configuration:
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: anchore-enterprise-metrics
  namespace: monitoring
  labels:
    team: my-monitoring-team
    release: kube-prom-stack
spec:
  namespaceSelector:
    matchNames:
      - anchore # Set this to match the namespace Anchore is deployed in
  selector:
    matchLabels:
      app.kubernetes.io/part-of: anchore # ADJUST THIS label selector
  podMetricsEndpoints:
    - path: /metrics # The endpoint path for metrics
      # By omitting the 'port' field, Prometheus will attempt to scrape
      # the '/metrics' path on ALL declared ports of the selected pods.
      scheme: http # Assuming metrics are exposed over HTTP
      # Interval and scrapeTimeout values can be uncommented below and customized if needed
      # interval: 30s
      # scrapeTimeout: 10s
      basicAuth:
        username:
          name: anchore-metrics-credentials # Name of the Secret
          key: username # Key inside the Secret
        password:
          name: anchore-metrics-credentials # Name of the Secret
          key: password # Key inside the Secret
EOF
kubectl apply -f - <<EOF
# This secret stores the credentials that Prometheus will use
# to authenticate when scraping metrics from the Anchore service pods.
apiVersion: v1
kind: Secret
metadata:
  name: anchore-metrics-credentials
  namespace: monitoring # Must be the namespace where Prometheus is installed
type: Opaque
data:
  # The username to use for Basic Auth (usually 'admin' for the Anchore metrics endpoint)
  username: YWRtaW4= # Base64 of 'admin'
  # IMPORTANT: Replace [YOUR_BASE64_OUTPUT] with the Base64-encoded version of
  # the ANCHORE_ADMIN_PASSWORD you have configured in your deployment.
  password: [YOUR_BASE64_OUTPUT]
EOF
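One way to produce the Base64 value for the password field is shown below; alternatively, kubectl create secret can build the Secret for you and handle the encoding. The placeholder password value is only an illustration:

# Encode the Anchore admin password for use in the Secret's data block
echo -n '<your ANCHORE_ADMIN_PASSWORD value>' | base64

# Or create the Secret directly and let kubectl handle the encoding
kubectl -n monitoring create secret generic anchore-metrics-credentials \
  --from-literal=username=admin \
  --from-literal=password='<your ANCHORE_ADMIN_PASSWORD value>'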
Carefully review the commented lines in the snippets above and modify the configuration to align with your environment, in particular the namespace selector (matchNames) and label selector (matchLabels), which define the scope of the pods the PodMonitor tracks. Any new pods subsequently created with matching labels in the selected namespaces will be automatically discovered and monitored based on this configuration.
For comprehensive details on the resource definition, refer to the official Prometheus Operator PodMonitor documentation.
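After applying the PodMonitor and Secret, a quick way to confirm they were picked up is to check that both resources exist and then look for the corresponding target group in the Prometheus UI. The target group naming shown below is an assumption based on how the Prometheus Operator typically names PodMonitor scrape jobs:

# Confirm the PodMonitor and Secret exist in the Prometheus namespace
kubectl -n monitoring get podmonitor anchore-enterprise-metrics
kubectl -n monitoring get secret anchore-metrics-credentials

# In the Prometheus UI (Status -> Targets), the scraped Anchore pods typically appear
# under a target group named after the PodMonitor, for example:
#   podMonitor/monitoring/anchore-enterprise-metrics/<endpoint index>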
The specific strategy for monitoring services with Prometheus is outside the scope of this document, but because Anchore exposes metrics on the /metrics route of all service ports, it should be compatible with most monitoring approaches (DaemonSets, sidecars, and so on).
For information on deploying Prometheus in a Docker Compose deployment, see Deploy using Docker Compose.
Metrics of Note
Anchore services export a range of metrics. The following list highlights metrics that can help you determine the health and load of an Anchore deployment.
- anchore_queue_length, specifically for queuename: "images_to_analyze"
  - This is the number of images pending analysis, in the not_analyzed state.
  - As this number grows you can expect longer analysis times.
  - Adding more analyzers to a system can help drain the queue faster and keep wait times to a minimum (an example alerting rule built on this metric is sketched after this list).
  - Example: anchore_queue_length{instance="engine-simpleq:8228",job="anchore-simplequeue",queuename="images_to_analyze"}.
  - This metric is exported from all simplequeue service instances, but is based on the database state, so they should all present a consistent view of the length of the queue.
- anchore_monitor_runtime_seconds_count
  - These metrics, one for each monitor, record the duration of the async processes as they execute on a duty cycle.
  - As the system grows, these will become longer to account for more tags to check for updates, repos to scan for new tags, and user notifications to process.
- anchore_tmpspace_available_bytes
  - This metric tracks the available space in the "tmp_dir" location for each container. This is most important for analyzer instances, where it indicates how much disk is being used for analysis and how much overhead there is for analyzing large images.
  - This is expected to be consumed in cycles, with usage growing during analysis and then flushing upon completion. A consistent growth pattern here may indicate leftover artifacts from analysis failures or a large layer_cache setting that is not yet full. The layer cache (see Layer Caching) is located in this space and thus will affect the metric.
- process_resident_memory_bytes
  - This is the memory actually consumed by the instance, where each instance is a service process of Anchore. Anchore is fairly memory intensive for large images and in deployments with many analyzed images, due to extensive JSON parsing and marshalling, so monitoring this metric will help inform capacity requirements for different components based on your specific workloads. Many variables affect memory usage, so while we give recommendations in the Capacity Planning document, there is no substitute for profiling and monitoring your usage carefully.
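As an illustration of how these metrics can be used, the following is a sketch of a kube-prometheus-stack PrometheusRule that alerts when the analysis queue stays deep. The rule name, namespace, release label, threshold, and timings are placeholder values to adjust for your environment and workload:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: anchore-queue-alerts
  namespace: monitoring
  labels:
    release: kube-prom-stack   # must match the operator's rule selector
spec:
  groups:
    - name: anchore.queue
      rules:
        - alert: AnchoreAnalysisBacklog
          # Fires when more than 25 images have been waiting for analysis for 15 minutes
          expr: max(anchore_queue_length{queuename="images_to_analyze"}) > 25
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: Anchore image analysis backlog is growing
            description: The images_to_analyze queue has stayed above 25 for 15 minutes; consider adding analyzer replicas.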