Core Concepts
This section covers the foundational concepts behind how Anchore Enterprise analyzes software, evaluates compliance, and manages security findings. Understanding these concepts helps you get the most out of the platform.
How Anchore Enterprise Works
Anchore Enterprise takes a data-driven approach to analysis and policy enforcement. The system processes each artifact through the following phases:
1. Fetch the content (container image or source repository) and extract it, or import a pre-existing SBOM generated by AnchoreCTL or another tool outside of Anchore Enterprise.
2. Analyze the content by running catalogers to extract and classify packages, dependencies, files, licenses, secrets, and other metadata into a comprehensive SBOM.
3. Store the resulting SBOM and analysis data in the database for future use, audit, and continuous monitoring.
4. Evaluate policies against the analysis result, including vulnerability matches on the artifacts discovered in the SBOM.
5. Update vulnerability data and other external datasets on a recurring schedule, automatically re-evaluating stored SBOMs against the new data.
6. Notify users of changes to policy evaluations, vulnerability matches, and other system events.
Steps 5 and 6 repeat at regular intervals to ensure you always have the latest external data and up-to-date evaluations, even for images analyzed weeks or months ago.
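The phases above can be sketched as a small pipeline. This is purely illustrative: the function names, record shapes, and vulnerability data below are hypothetical, not Anchore Enterprise APIs.

```python
# Illustrative sketch of the analyze-once, re-evaluate-continuously loop.
# All names and data here are hypothetical, not real Anchore APIs.

def fetch(artifact):
    # Pull the image or repository (or accept an imported SBOM).
    return {"artifact": artifact, "content": f"contents of {artifact}"}

def analyze(fetched):
    # Run catalogers over the content to build an SBOM.
    return {"artifact": fetched["artifact"], "packages": ["openssl", "zlib"]}

def evaluate(sbom, vuln_db):
    # Match the stored SBOM's packages against current vulnerability data.
    return sorted(p for p in sbom["packages"] if p in vuln_db)

store = {}             # stands in for the Anchore database
vuln_db = {"openssl"}  # stands in for external vulnerability feeds

sbom = analyze(fetch("registry.example.com/app:1.0"))
store[sbom["artifact"]] = sbom  # analyzed and stored once

findings = evaluate(store["registry.example.com/app:1.0"], vuln_db)
print(findings)  # ['openssl']

# Later the feeds update; the stored SBOM is re-evaluated without re-analysis.
vuln_db.add("zlib")
findings = evaluate(store["registry.example.com/app:1.0"], vuln_db)
print(findings)  # ['openssl', 'zlib']
```

The key property the sketch shows is that step 5 re-runs `evaluate` against the stored SBOM; the artifact itself is never fetched or analyzed again.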
Key Concepts
SBOMs
A Software Bill of Materials (SBOM) is the machine-readable inventory of packages, files, licenses, and relationships extracted from an analyzed artifact. SBOMs are the foundational record Anchore Enterprise stores and re-evaluates over time — the same SBOM drives vulnerability matching, policy evaluation, reporting, VEX generation, base-image comparison, and drift detection. Container images are one kind of asset from which an SBOM can be derived; source repositories and pre-built SBOMs from other tools are also supported.
Learn more: SBOMs
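As a data structure, an SBOM is simply a structured inventory that many evaluations read from. The record below is a simplified illustration with hypothetical field names, not the actual schema Anchore Enterprise emits:

```python
# A toy SBOM record: one inventory, several downstream uses.
# Field names and data are illustrative only.
sbom = {
    "artifact": "registry.example.com/app:1.0",
    "packages": [
        {"name": "openssl", "version": "3.0.2", "licenses": ["Apache-2.0"]},
        {"name": "left-pad", "version": "1.3.0", "licenses": ["WTFPL"]},
    ],
    "files": ["/usr/lib/libssl.so.3"],
}

# License policy and vulnerability matching both read the same record.
licenses = {lic for pkg in sbom["packages"] for lic in pkg["licenses"]}
denied = licenses & {"WTFPL", "AGPL-3.0"}  # toy denied-license check
known_vulnerable = {("openssl", "3.0.2")}  # toy vulnerability data
vulnerable = [p["name"] for p in sbom["packages"]
              if (p["name"], p["version"]) in known_vulnerable]
```

Because every consumer works from this one stored record, re-checking an artifact against new rules or new data never requires re-analyzing the artifact itself.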
Image Analysis
Anchore Enterprise analyzes container images by unpacking their layers and cataloging all software components into an SBOM. Analysis can happen centrally (the server pulls and unpacks the image) or in a distributed fashion (AnchoreCTL generates the SBOM locally and uploads it). The result is a stored SBOM that drives all subsequent vulnerability matching and policy evaluation.
Learn more: Images
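Centralized and distributed analysis are two paths to the same stored record. A minimal sketch, with hypothetical function names:

```python
def catalog(filesystem):
    # Shared cataloging logic: classify filesystem entries into package names.
    # (Real catalogers inspect package databases, manifests, binaries, etc.)
    return sorted(path.rsplit("/", 1)[-1] for path in filesystem)

def central_analysis(image_fs):
    # Central: the server pulls and unpacks the image, then catalogs it.
    return {"source": "central", "packages": catalog(image_fs)}

def distributed_analysis(image_fs):
    # Distributed: AnchoreCTL catalogs locally and uploads only the SBOM.
    return {"source": "distributed", "packages": catalog(image_fs)}

fs = ["/usr/bin/openssl", "/usr/lib/zlib"]
a = central_analysis(fs)
b = distributed_analysis(fs)
assert a["packages"] == b["packages"]  # same SBOM either way
```

The distributed path matters when the server cannot reach the registry, or when the image should never leave the build environment; only the SBOM crosses the network.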
Policies and Rule Sets
Policies define the compliance rules that Anchore Enterprise evaluates against analyzed artifacts. A policy contains one or more rule sets, each composed of gates (categories of checks) and triggers (specific conditions). Gates cover vulnerabilities, licenses, secrets, file permissions, metadata, and more. Evaluations produce pass, warn, or fail outcomes that can drive CI/CD decisions or admission control.
Learn more: Policies
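The gate/trigger/action structure can be sketched as follows. The rule shapes and severity ordering are illustrative assumptions, not Anchore's actual policy language:

```python
# Toy policy evaluation: gates group triggers; each fired trigger yields an
# action, and the final result is the worst action that fired.
SEVERITY = {"pass": 0, "warn": 1, "fail": 2}

policy = [
    # (gate, trigger, condition, action) -- hypothetical rule shapes
    ("vulnerabilities", "critical_found", lambda a: a["critical_vulns"] > 0, "fail"),
    ("secrets", "secret_found", lambda a: bool(a["secrets"]), "warn"),
    ("licenses", "denied_license", lambda a: "AGPL-3.0" in a["licenses"], "fail"),
]

def evaluate(artifact):
    fired = [(gate, trig, act) for gate, trig, cond, act in policy if cond(artifact)]
    final = max((act for _, _, act in fired), key=SEVERITY.get, default="pass")
    return fired, final

artifact = {"critical_vulns": 0, "secrets": ["aws_key"], "licenses": ["MIT"]}
fired, final = evaluate(artifact)
print(final)  # 'warn' -- a secrets trigger fired, but no fail-level trigger
```

A CI/CD pipeline or admission controller would then act on `final`: block on fail, surface warnings, and let passing artifacts through.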
Reporting
The Reporting engine lets teams query stored SBOMs, policy evaluations, and vulnerability matches across the fleet to answer questions like “which images are failing policy?” or “which Kubernetes namespaces are running containers with critical vulnerabilities?” Template-based reports can be run ad hoc or scheduled for continuous delivery via notification endpoints, and results are available in tabular, JSON, and CSV formats. Reporting is distinct from raw SBOM and vulnerability export — it is the platform’s internal query layer over its own data.
Learn more: Reporting
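In miniature, reporting is a query over the records the platform already holds. The record shape and field names here are illustrative, not the reporting service's actual schema:

```python
# Toy fleet-wide queries over stored evaluation records (hypothetical schema).
records = [
    {"image": "app:1.0", "policy_status": "fail", "critical_vulns": 2},
    {"image": "api:2.3", "policy_status": "pass", "critical_vulns": 0},
    {"image": "db:5.7",  "policy_status": "fail", "critical_vulns": 0},
]

# "Which images are failing policy?"
failing = [r["image"] for r in records if r["policy_status"] == "fail"]

# "Which images carry critical vulnerabilities?"
with_criticals = [r["image"] for r in records if r["critical_vulns"] > 0]
```

Because the queries run over stored results rather than re-analyzing artifacts, scheduled reports stay cheap even across a large fleet.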
Vulnerability Remediation
Anchore Enterprise provides multiple mechanisms to manage and remediate vulnerability findings. VEX (Vulnerability Exploitability eXchange) annotations allow teams to document the exploitability status of specific vulnerabilities. Corrections can override CPE-based matching to reduce false positives. Reporting and policy-driven workflows help prioritize and track remediation efforts.
Learn more: Remediation
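The effect of VEX annotations and corrections on a finding list can be sketched like this. The match records, status values, and filtering logic are illustrative assumptions:

```python
# Toy remediation filters: VEX statements suppress findings documented as
# not_affected; a correction removes a known CPE-based false positive.
matches = [
    {"cve": "CVE-2024-0001", "package": "openssl", "via": "cpe"},
    {"cve": "CVE-2024-0002", "package": "zlib",    "via": "exact"},
    {"cve": "CVE-2024-0003", "package": "curl",    "via": "exact"},
]
vex = {"CVE-2024-0002": "not_affected"}       # documented exploitability status
corrections = {("CVE-2024-0001", "openssl")}  # known CPE false positive

actionable = [
    m for m in matches
    if vex.get(m["cve"]) != "not_affected"
    and (m["cve"], m["package"]) not in corrections
]
print([m["cve"] for m in actionable])  # ['CVE-2024-0003']
```

The point of both mechanisms is the same: the raw match data is preserved, while the findings teams actually triage shrink to the ones that are real and exploitable.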
Data Management
Anchore Enterprise stores every analyzed artifact and its derived data — SBOMs, vulnerability matches, and policy evaluations — so the same record can be re-evaluated, audited, and compared over time. The analysis archive moves older or less-active analyses to cheaper object storage (such as S3) while retaining the ability to restore them, and artifact lifecycle policies automatically delete analysis data based on age, tag history, or runtime-inventory status. Together they give deployments a multi-tier retention model that keeps storage growth under control without losing data the organization actively uses.
Learn more: Data Management
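The multi-tier retention model amounts to a per-analysis decision: keep hot, archive cold, delete expired. The thresholds and record fields below are illustrative, not Anchore's actual lifecycle-rule syntax:

```python
from datetime import date

# Toy lifecycle decision (hypothetical thresholds and fields).
def lifecycle_action(analysis, today, archive_after_days=90, delete_after_days=365):
    age = (today - analysis["analyzed"]).days
    if age >= delete_after_days and not analysis["latest_for_tag"]:
        return "delete"   # aged out and superseded in tag history
    if age >= archive_after_days:
        return "archive"  # SBOM moves to object storage (e.g. S3), restorable
    return "keep"         # stays in the primary database

today = date(2026, 4, 22)
print(lifecycle_action({"analyzed": date(2026, 4, 1),  "latest_for_tag": True},  today))  # keep
print(lifecycle_action({"analyzed": date(2025, 12, 1), "latest_for_tag": True},  today))  # archive
print(lifecycle_action({"analyzed": date(2024, 1, 1),  "latest_for_tag": False}, today))  # delete
```

Archived analyses can be restored on demand, so the archive tier trades retrieval latency for cheaper storage rather than discarding data.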
Last modified April 22, 2026