Core Concepts
This section covers the foundational concepts behind how Anchore Enterprise analyzes software, evaluates compliance, and manages security findings. Understanding these concepts helps you get the most out of the platform.
How Anchore Enterprise Works
Anchore Enterprise takes a data-driven approach to analysis and policy enforcement. The system processes each artifact through the following phases:
1. Fetch the content (container image or source repository) and extract it, or import a pre-existing SBOM generated outside of Anchore Enterprise by another tool or AnchoreCTL.
2. Analyze the content by running catalogers to extract and classify packages, dependencies, files, licenses, secrets, and other metadata into a comprehensive SBOM.
3. Store the resulting SBOM and analysis data in the database for future use, audit, and continuous monitoring.
4. Evaluate policies against the analysis result, including vulnerability matches on the artifacts discovered in the SBOM.
5. Update vulnerability data and other external datasets on a recurring schedule, automatically re-evaluating stored SBOMs against new data.
6. Notify users of changes to policy evaluations, vulnerability matches, and other system events.
Steps 5 and 6 repeat on intervals to ensure you always have the latest external data and updated evaluations, even for images analyzed weeks or months ago.
Key Concepts
SBOMs
A Software Bill of Materials (SBOM) is the machine-readable inventory of packages, files, licenses, and relationships extracted from an analyzed artifact. SBOMs are the foundational record Anchore Enterprise stores and re-evaluates over time — the same SBOM drives vulnerability matching, policy evaluation, reporting, VEX generation, base-image comparison, and drift detection. Container images are one kind of asset from which an SBOM can be derived; source repositories and pre-built SBOMs from other tools are also supported.
Learn more: SBOMs
Image Analysis
Anchore Enterprise analyzes container images by unpacking their layers and cataloging all software components into an SBOM. Analysis can happen centrally (the server pulls and unpacks the image) or in a distributed fashion (AnchoreCTL generates the SBOM locally and uploads it). The result is a stored SBOM that drives all subsequent vulnerability matching and policy evaluation.
Learn more: Images
Policies and Rule Sets
Policies define the compliance rules that Anchore Enterprise evaluates against analyzed artifacts. A policy contains one or more rule sets, each composed of gates (categories of checks) and triggers (specific conditions). Gates cover vulnerabilities, licenses, secrets, file permissions, metadata, and more. Evaluations produce pass, warn, or fail outcomes that can drive CI/CD decisions or admission control.
Learn more: Policies
Reporting
The Reporting engine lets teams query stored SBOMs, policy evaluations, and vulnerability matches across the fleet to answer questions like “which images are failing policy?” or “which Kubernetes namespaces are running containers with critical vulnerabilities?” Template-based reports can be run ad hoc or scheduled for continuous delivery via notification endpoints, and results are available in tabular, JSON, and CSV formats. Reporting is distinct from raw SBOM and vulnerability export — it is the platform’s internal query layer over its own data.
Learn more: Reporting
Remediation
Anchore Enterprise provides multiple mechanisms to manage and remediate vulnerability findings. VEX (Vulnerability Exploitability eXchange) annotations allow teams to document the exploitability status of specific vulnerabilities. Corrections can override CPE-based matching to reduce false positives. Reporting and policy-driven workflows help prioritize and track remediation efforts.
Learn more: Remediation
Data Management
Anchore Enterprise stores every analyzed artifact and its derived data — SBOMs, vulnerability matches, and policy evaluations — so the same record can be re-evaluated, audited, and compared over time. The analysis archive moves older or less-active analyses to cheaper object storage (such as S3) while retaining the ability to restore them, and artifact lifecycle policies automatically delete analysis data based on age, tag history, or runtime-inventory status. Together they give deployments a multi-tier retention model that keeps storage growth under control without losing data the organization actively uses.
Learn more: Data Management
1 - SBOMs
A Software Bill of Materials (SBOM) is a structured, machine-readable inventory of the components that make up a piece of software — operating-system packages, language-ecosystem libraries, files, licenses, and the relationships between them. The common analogy is a nutrition label: an explicit declaration of the “ingredients” inside a piece of software, from which consumers can answer “what’s actually in here?” without needing to crack it open themselves.
In Anchore Enterprise, the SBOM is the foundation. It is not a downstream artifact produced as a side-effect of scanning; it is the central record that every other capability depends on. Container images are one kind of asset from which an SBOM can be derived, and the primary focus of most deployments — but source repositories, filesystem artifacts, and externally supplied SBOMs are also first-class inputs. Once an SBOM is stored, the platform re-evaluates, exports, matches, and compares it over time.
Why SBOMs Matter
SBOMs moved from a nice-to-have to a board-level requirement over a short period of time. Three drivers converged:
- Regulatory mandates — U.S. Executive Order 14028 (2021) required SBOMs for software delivered to federal agencies, and subsequent guidance from NIST (SP 800-53, SP 800-218 / SSDF), FedRAMP, and the EU Cyber Resilience Act extended the expectation into broader commercial procurement.
- High-profile supply-chain incidents — SolarWinds, Codecov, and the Log4Shell vulnerability in Log4j each demonstrated that organizations could not quickly answer a basic question: “do we use this component, and where?” An SBOM inventory turns that question from a weeks-long audit into a database lookup.
- Interoperability standards — the NTIA’s Minimum Elements for a Software Bill of Materials (2021) established a baseline — component name, supplier, version, unique identifier, dependency relationships, SBOM author, and timestamp — that modern SBOM formats satisfy. This baseline makes SBOMs exchangeable across organizations and tools rather than stuck in vendor-specific representations.
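The NTIA baseline amounts to a completeness check. As an illustration (not an Anchore Enterprise API, and with hypothetical field names rather than the SPDX or CycloneDX schema), it can be sketched as:

```python
# Illustrative sketch only: checks a generic SBOM-like dict against the
# NTIA minimum-elements baseline. Field names are hypothetical.
NTIA_COMPONENT_FIELDS = {"name", "supplier", "version", "identifier"}
NTIA_DOCUMENT_FIELDS = {"author", "timestamp", "relationships"}

def missing_ntia_elements(sbom: dict) -> set:
    """Return the set of baseline elements absent from an SBOM-like dict."""
    missing = {f for f in NTIA_DOCUMENT_FIELDS if not sbom.get(f)}
    for component in sbom.get("components", []):
        missing |= {f for f in NTIA_COMPONENT_FIELDS if not component.get(f)}
    return missing
```

A document passing this check carries enough identity and provenance to be exchanged between tools without loss.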
Anchore Enterprise is designed around these drivers: SBOMs are stored as first-class records, exchanged in open formats, and reused for as many security, compliance, and audit questions as possible.
What an SBOM Captures
Anchore Enterprise extracts a single SBOM per analyzed artifact. The SBOM records the material that downstream scanning and policy decisions need:
- Packages — operating-system packages (dpkg, rpm, apk, etc.) and language-ecosystem packages (Java, Python, Node.js, Go, Ruby, .NET, and others), each with name, version, license, and package-type metadata.
- Files — a file inventory with coordinates and optional content hashes, used for checks that depend on file layout as well as language-ecosystem package discovery.
- Distribution metadata — the detected Linux distribution and release for container images, which determines which vulnerability feeds are consulted.
- Source metadata — identifying information for the analyzed artifact: image reference, repository digests, architecture, OS, and, for container images, the raw manifest and config.
- Relationships — the dependency graph connecting packages to the files and parents that declared them, used for precise vulnerability localization.
Several related content types travel alongside the SBOM rather than inside it — secret-scan results, content-search results, retrieved file contents, image manifest, parent manifest, and Dockerfile. These are stored as separate content records tied to the same image record, so they can be queried independently without bloating the SBOM itself.
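Put together, the captured material can be pictured as one record per artifact. The sketch below is purely illustrative, with hypothetical field names rather than the Syft or Anchore Enterprise schema:

```python
# Hypothetical per-artifact record shape; field names are illustrative.
def make_sbom_record(source, distro, packages, files, relationships):
    return {
        "source": source,               # image reference, digest, arch, OS
        "distro": distro,               # drives vulnerability-feed selection
        "packages": packages,           # OS and language-ecosystem packages
        "files": files,                 # file inventory with optional hashes
        "relationships": relationships, # package -> file / parent edges
    }

record = make_sbom_record(
    source={"image": "docker.io/library/nginx:latest", "arch": "amd64"},
    distro={"name": "debian", "version": "12"},
    packages=[{"name": "openssl", "version": "3.0.11", "type": "deb",
               "license": "Apache-2.0"}],
    files=[{"path": "/usr/bin/openssl", "sha256": None}],
    relationships=[{"parent": "openssl", "child": "/usr/bin/openssl"}],
)
```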
How SBOMs Are Produced and Exchanged
Anchore Enterprise treats SBOMs as a two-way flow: the platform generates SBOMs for the software an organization produces, and also consumes SBOMs the organization receives from its suppliers.
SBOMs land in Anchore Enterprise from three distinct sources, all of which participate in the same downstream flows once stored:
- Centralized analysis — the Anchore Enterprise deployment pulls an image from a registry, unpacks it, and generates the SBOM server-side.
- Distributed analysis — AnchoreCTL pulls the image (or source tree) locally, runs Syft to generate the SBOM client-side, and uploads the result. Source content never leaves the client. Syft is Anchore’s open-source SBOM generator, and Anchore Enterprise builds on that foundation to store, evaluate, and enforce policy against SBOMs at scale. See Images for when to pick which mode.
- External import (“Bring Your Own SBOM”) — an SBOM produced by another tool or vendor is uploaded directly, without requiring the underlying artifact. This is how procurement teams ingest supplier SBOMs, how M&A due-diligence and third-party audits are brought into the same analysis pipeline, and how components that were never built by Anchore Enterprise get vulnerability and license visibility.
Internally, Anchore Enterprise stores SBOMs in the Syft native JSON format. At the edges of the system, the two dominant open standards are supported for both import and export:
- CycloneDX (OWASP) — imported and exported in JSON and XML.
- SPDX (Linux Foundation) — imported and exported in JSON and tag-value formats.
- Syft — the native internal format; produced by AnchoreCTL distributed analysis and by Anchore Enterprise centralized analysis.
Both CycloneDX and SPDX satisfy the NTIA minimum-elements baseline, which is why an SBOM generated once can be re-emitted in whichever format a downstream consumer requires — auditor, customer, or another security tool. For the exact schema versions supported for upload and download, see SBOM Management.
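Because the baseline fields map cleanly between formats, re-emission is largely a field translation. As a rough sketch (the input field names are hypothetical; the output keys follow the CycloneDX JSON component structure):

```python
# Illustrative only: re-emitting one internal package record in a
# CycloneDX-style component shape.
def to_cyclonedx_component(pkg: dict) -> dict:
    return {
        "type": "library",
        "name": pkg["name"],
        "version": pkg["version"],
        "purl": pkg.get("purl"),
        "licenses": [{"license": {"id": lic}} for lic in pkg.get("licenses", [])],
    }
```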
How Anchore Enterprise Uses the SBOM
Storing the SBOM means every downstream capability can be derived from the same canonical record, without re-analyzing the artifact. The same SBOM answers a range of questions well beyond “what CVEs apply”:
- Vulnerability matching — the packages in the SBOM are matched against Anchore Enterprise’s consolidated vulnerability data on a recurring schedule, so newly disclosed CVEs surface automatically on previously analyzed software. For how matching is performed, see How It Works.
- License compliance — the SBOM records per-package license metadata, which policy rules use to enforce allow-lists, denylists, or obligations (for example, flagging GPL-licensed components in proprietary releases).
- Policy evaluation — rule sets evaluate the SBOM’s packages, files, licenses, metadata, and Dockerfile instructions to produce a pass/warn/fail outcome. See Policy.
- Reporting, audit, and procurement responses — stored SBOMs feed scheduled and ad-hoc reports and can be re-emitted as CycloneDX or SPDX for auditors, customers, regulators, or downstream tooling. This is the primary mechanism for responding to customer or procurement requests that require an SBOM alongside a delivered product.
- VEX generation — vulnerability annotations applied on the SBOM’s findings are emitted as OpenVEX or CycloneDX VEX documents, combining the SBOM with its exploitability statements. See Remediation.
- Base-image comparison — because both an image’s SBOM and its base image’s SBOM are stored, findings can be partitioned into “inherited from base” versus “introduced by this image”. See Compare Against a Base Image.
- SBOM drift detection — successive SBOMs for the same image can be diffed to detect added, removed, or changed components, and the drift result can drive policy to catch unauthorized modifications, developer error, or supply-chain attacks. See SBOM Drift.
- Component provenance and trend analysis — stored historical SBOMs let teams trace when a specific component entered the codebase, identify every release that carries a given library version, and track the evolution of the software’s composition over time.
- Application grouping — individual SBOMs can be grouped into higher-level applications that reflect the way an organization actually delivers software, making batch reporting and policy management possible across related artifacts.
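Of these, SBOM drift detection is the most mechanical: at its core it is a diff of two package inventories. A minimal sketch (illustrative, not the production algorithm):

```python
# Diff two package inventories keyed by name to detect drift.
def sbom_drift(old_pkgs, new_pkgs):
    old = {p["name"]: p["version"] for p in old_pkgs}
    new = {p["name"]: p["version"] for p in new_pkgs}
    return {
        "added":   sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(n for n in old.keys() & new.keys() if old[n] != new[n]),
    }
```

Any non-empty bucket between two builds of the same tag is a drift event that policy can act on.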
Because the SBOM is persistent and the data sources around it (vulnerability feeds, policy definitions, annotations) change over time, a single analysis remains useful for the full life of the software it describes.
2 - Analyzing Images
When an image is submitted to Anchore Enterprise, the deployment retrieves the image manifest from the configured registry and hands it off to an analyzer worker. Analyzer workers run in parallel and pull from a shared queue, so scaling throughput is a matter of adding more analyzer replicas.
During analysis, a set of catalogers inspects every package, software library, and file, producing a comprehensive SBOM. The SBOM is persisted to the database and drives all downstream vulnerability matching and policy evaluation.
Centralized and Distributed Analysis
Anchore Enterprise supports two analysis models that differ only in where SBOM generation happens:
- Centralized analysis — the Anchore Enterprise deployment pulls the image from the registry, unpacks it, and generates the SBOM server-side.
- Distributed analysis — AnchoreCTL pulls the image locally, generates the SBOM on the client, and uploads the result. Source image content never leaves the client machine.
sequenceDiagram
participant A as AnchoreCTL
participant R as Registry
participant E as Anchore Enterprise
A->>E: Request image analysis
E->>R: Get image content
R-->>E: Image content
E->>E: Generate SBOM, secret scan, etc.
E->>E: Scan SBOM for vulns and evaluate compliance
Centralized analysis — all image content is handled by the deployment.
sequenceDiagram
participant A as AnchoreCTL
participant R as Registry/Docker Daemon
participant E as Anchore Enterprise
A->>R: Get image content
R-->>A: Image content
A->>A: Generate SBOM, secret scan, etc.
A->>E: Import SBOM, secret search, fs metadata
E->>E: Scan SBOM for vulns and evaluate compliance
Distributed analysis — image content stays on the client; only the SBOM is uploaded.
Representative commands:
# Centralized — the deployment pulls and analyzes
anchorectl image add docker.io/library/nginx:latest
# Distributed — anchorectl pulls from the registry and analyzes locally
anchorectl image add docker.io/library/nginx:latest --from registry
A stateless variant of distributed analysis is available via anchorectl image one-time-scan. It runs the same client-side flow and returns policy and vulnerability results to the caller, but the SBOM is never persisted in Anchore Enterprise — useful in CI pipelines that want fast pass/fail feedback without growing the deployment’s SBOM history. See One-Time Scan for details.
Analysis State
Analysis is asynchronous. Workers poll an internal queue and update the image’s analysis_status as processing progresses.
stateDiagram
[*] --> not_analyzed: analysis queued
not_analyzed --> analyzing: analyzer starts processing
analyzing --> analyzed: analysis completed successfully
analyzing --> analysis_failed: analysis fails
analyzing --> not_analyzed: re-queue by timeout or analyzer shutdown
analysis_failed --> not_analyzed: re-queued by user request
analyzed --> not_analyzed: re-queued for re-processing by user request
Monitor Images for Updates
Anchore Enterprise watches the external world on your behalf:
- Repository updates — new tags appearing on a watched repository are automatically added as subscriptions.
- Tag updates — when the image digest a tag points to changes, the new image is pulled and re-analyzed.
Both checks are driven by the Catalog component on a configurable duty cycle. To manage subscriptions and tune cycle intervals, see Subscriptions.
Base and Parent Images
Container images are built on top of other images via the FROM clause. Docker calls the referenced image the parent image; the broader community uses base image for the same concept, and Anchore Enterprise follows the latter convention. A chain of images related via FROM is an image’s ancestry.
Docker itself defines a base image as an image declared with FROM scratch. Anchore Enterprise does not use that definition — throughout these docs, base image means the image a given image was built from.
Ancestry is reconstructed automatically by comparing layer digests: image B is a descendant of image A only if every layer of A is present in B. No configuration is required — Anchore Enterprise computes ancestry as new images are analyzed.
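The ancestry rule above reduces to a set containment check. A minimal sketch (illustrative, not the production algorithm):

```python
# Image B descends from image A only if every layer digest of A is
# also present in B.
def is_descendant(candidate_layers, ancestor_layers):
    return set(ancestor_layers) <= set(candidate_layers)
```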
graph LR
debian:10-->|parent of|mynode:latest
mynode:latest-->|parent of|myapp:v1
Choose the Base Image
By default, the base image is the closest ancestor — mynode:latest for myapp:v1 above. You can override the default by marking a specific ancestor with an annotation, useful for designating a platform team’s “golden image” as the base rather than the immediate parent. The selection rules are:
graph TD
start([start])-->image[image]
image-->first_parent_exists{Does this image have a parent?}
first_parent_exists-->|No|no_base_image
first_parent_exists-->|Yes|first_parent_image[Parent image]
first_parent_image-->config{User base annotations enabled?}
config-->|No|base_image
config-->|Yes|check_parent{Parent has anchore.user/marked_base_image: true?}
check_parent-->|No|parent_exists{Does the parent have a parent?}
parent_exists-->|Yes|parent_image[/Move to next parent image/]
parent_image-->check_parent
parent_exists-->|No|no_base_image
check_parent-->|Yes|base_image
base_image([Found base image])
no_base_image([No base image exists])
The base image filters inherited findings out of policy evaluations and vulnerability scans so developers can focus on issues introduced by their own changes. For the full base-comparison feature, see Compare Against a Base Image.
For the annotation syntax, AnchoreCTL command, and the configuration flag that enables user-marked base images, see Base Image Annotations.
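The selection flowchart can be sketched as a short walk over the ancestry chain. This is illustrative only; the list ordering and annotation lookup are assumptions, though the annotation key follows the flowchart:

```python
# `ancestry` lists ancestors from the immediate parent outward; each is a
# dict with optional annotations.
MARKER = "anchore.user/marked_base_image"

def choose_base_image(ancestry, user_annotations_enabled):
    if not ancestry:
        return None                      # no parent -> no base image
    if not user_annotations_enabled:
        return ancestry[0]               # default: closest ancestor
    for ancestor in ancestry:            # walk the parent chain outward
        if ancestor.get("annotations", {}).get(MARKER) == "true":
            return ancestor              # first marked ancestor wins
    return None                          # enabled but nothing marked
```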
Malware Scanning
During centralized analysis, Anchore Enterprise runs a ClamAV-based malware scan over the image content. Findings are exposed on the image record and can be acted on via the Malware Policy Gate. For enabling scanning, tuning signature-database refresh behavior, and handling images larger than 2 GB, see Scanning Configuration.
3 - Policy
Policy is how Anchore Enterprise turns findings into decisions. Once an artifact has been analyzed into an SBOM, a policy evaluates that SBOM — together with its vulnerability matches, metadata, and file content — and produces a pass-or-fail verdict that a CI pipeline, admission controller, or reviewer can act on. Policies are authored as JSON documents and managed through the Enterprise UI, AnchoreCTL, or the API; they can be evaluated against an artifact on demand, or continuously as vulnerability data and SBOM contents change over time.
Components of a Policy
A policy is a JSON document (a policy bundle) composed of four kinds of configuration, each with its own editor and API surface:
- Rule sets — named collections of rules that define the checks to run. See Rule Sets, Gates, and Triggers below.
- Mappings — ordered selection rules that pick which rule sets and allowlists apply to a given artifact. The first mapping that matches wins. See Policy Mappings.
- Allowlists — per-trigger exclusions that downgrade a specific finding (for example, a known false-positive CVE) to go, so it no longer contributes to a failing evaluation. See Allowlists.
- Allowlisted and denylisted images — deployment-wide overrides that force an image’s final result to pass or fail regardless of the rest of the evaluation. See Allowed / Denied Images.
For the complete JSON shape of a policy bundle, see Policy Bundle JSON Structure.
Rule Sets, Gates, and Triggers
A rule set is a named collection of rules. Each rule specifies four things:
- A gate — the category of check (for example, vulnerabilities, metadata, dockerfile, secrets, files).
- A trigger — the specific condition within that gate that should match (for example, a vulnerability at or above a severity threshold).
- A set of parameters — gate- and trigger-specific inputs that refine the match.
- An action — stop, warn, or go — emitted whenever the trigger fires.
Gates provide a logical namespace for triggers, so the same trigger name can exist in different gates without collision. A trigger can fire many times within a single evaluation — one match per detected instance — and each firing produces a finding that carries a unique trigger ID, the rule’s action, and a human-readable message. Findings are the audit-trail unit: every rule that matches contributes one or more findings to the evaluation result.
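The rule and finding shapes described above can be sketched roughly as follows. This is a hypothetical illustration, not the policy-bundle JSON schema:

```python
# Hypothetical rule shape: gate + trigger + parameters + action.
rule = {
    "gate": "vulnerabilities",
    "trigger": "package",
    "params": {"severity_comparison": ">=", "severity": "high"},
    "action": "stop",
}

def fire(rule, matched_instances):
    """One finding per matched instance, each carrying the rule's action."""
    return [
        {
            "trigger_id": instance["id"],
            "gate": rule["gate"],
            "trigger": rule["trigger"],
            "action": rule["action"],
            "message": f"{rule['gate']}/{rule['trigger']} matched {instance['id']}",
        }
        for instance in matched_instances
    ]
```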
For the full catalog of gates and their triggers, see Policy Checks.
Actions and the Final Action
Each rule declares a per-rule action — stop, warn, or go. When the rule’s trigger matches, that action is recorded alongside the finding. Once all rules have run, Anchore Enterprise aggregates every recorded action into a single final action for the evaluation:
- stop — at least one finding carried action stop (and was not downgraded by an allowlist). The artifact fails evaluation.
- warn — no stop actions survived, but at least one warn did. The artifact passes with a warning.
- go — no stop or warn actions. The artifact passes cleanly.
go findings never affect the outcome on their own, but they are still recorded in the evaluation response so historical audits can see that a check ran and produced no issue.
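The aggregation rule is a simple precedence order, stop over warn over go. As a sketch (illustrative, not the evaluation engine):

```python
# Aggregate per-finding actions into the evaluation's final action.
def final_action(actions):
    if "stop" in actions:
        return "stop"
    if "warn" in actions:
        return "warn"
    return "go"
```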
For the per-finding field reference and an example findings payload, see Evaluation Findings.
Policy Evaluation Flow
A full evaluation takes two slightly different paths depending on the artifact type. Container images go through image-based mapping, every supported gate, and the deployment-wide allowlisted/denylisted image overrides. Imported SBOMs go through SBOM-based mapping and evaluate only the vulnerabilities gate; they are not subject to image-level overrides. Both paths converge on the same action-aggregation logic and the same pass/warn/fail terminal states.
flowchart TD
start([Policy evaluation requested]) --> artifact{Artifact type?}
artifact -->|Container image| imap[Select image mapping<br/>registry, repository, tag]
artifact -->|Imported SBOM| smap[Select SBOM mapping<br/>name and version]
imap --> irs[Evaluate mapped rule sets<br/>across all gates]
smap --> srs[Evaluate mapped rule sets<br/>vulnerabilities gate only]
irs --> itrig[Triggers fire<br/>each emits stop, warn, or go]
srs --> strig[Triggers fire<br/>each emits stop, warn, or go]
itrig --> iallow[Apply mapped allowlists<br/>matching trigger IDs become go]
strig --> sallow[Apply mapped allowlists<br/>matching trigger IDs become go]
iallow --> deny{Image in<br/>denylisted_images?}
deny -->|Yes| fail([Final action: fail])
deny -->|No| allow{Image in<br/>allowlisted_images?}
allow -->|Yes| pass([Final action: pass])
allow -->|No| agg{Aggregated<br/>action across<br/>all findings}
sallow --> agg
agg -->|stop| fail
agg -->|warn| pass_warn([Final action: pass with warning])
agg -->|go| pass
At every step the finding-by-finding output is preserved in the evaluation response, so reviewers can see not just whether an artifact passed but why — every trigger that fired, every allowlist that matched, and every image-level override is recorded alongside the final action.
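The image path through the flowchart can be condensed into one function. This is an illustrative sketch of the ordering (allowlists downgrade, image overrides short-circuit, then actions aggregate), not the production engine:

```python
# Compact sketch of the image evaluation path.
def evaluate_image(findings, allowlisted_ids, denylisted, allowlisted):
    # Allowlists downgrade matching trigger IDs to go.
    downgraded = [
        dict(f, action="go") if f["trigger_id"] in allowlisted_ids else f
        for f in findings
    ]
    # Image-level overrides short-circuit the aggregation.
    if denylisted:
        return "fail", downgraded
    if allowlisted:
        return "pass", downgraded
    actions = {f["action"] for f in downgraded}
    if "stop" in actions:
        return "fail", downgraded
    return ("pass_with_warning" if "warn" in actions else "pass"), downgraded
```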
Pass and Fail
A policy evaluation resolves to one of two terminal states:
- Pass — the artifact may proceed. Pass covers both “no findings” (go) and “findings with warnings, but nothing blocking” (warn). Warning findings are recorded for audit purposes but do not block.
- Fail — the artifact is blocked. Either an image-level denylist matched (container images only), or at least one rule fired with action stop that was not downgraded by an allowlist.
Because pass/fail is a deterministic function of the policy bundle, the SBOM, and the available vulnerability data, the same evaluation can be re-run later — as vulnerability feeds update or as the policy itself evolves — and the verdict may change without the artifact itself changing. Continuous re-evaluation is how Anchore Enterprise surfaces newly disclosed vulnerabilities against images that were analyzed weeks or months ago.
Further Reading
- Policy Checks — the full catalog of gates, triggers, and parameters.
- Managing Policies — authoring, uploading, downloading, and activating policy bundles, including the JSON bundle reference.
- Testing Policies — previewing evaluations and inspecting detailed findings output.
4 - Reporting
Analyzing software produces a continuous stream of data — SBOMs, vulnerability matches, policy evaluations, runtime inventory snapshots — and none of it is actionable until someone can ask the right question against it. Anchore Enterprise includes a reporting engine, driven by the Enterprise Reporting Service, that lets teams query the platform’s own data in structured, repeatable ways to answer questions like “which images are failing policy?”, “which tags are affected by a specific CVE?”, or “which Kubernetes namespaces are running containers with critical vulnerabilities?”
Reporting is distinct from the platform’s export mechanisms. Exports emit SBOMs (SPDX, CycloneDX, Syft), vulnerability disclosure documents, and VEX statements for consumption by downstream tools, auditors, or customers. Reporting, by contrast, is the platform’s internal query layer — it takes stored SBOMs, policy evaluations, and vulnerability matches and produces structured results for the team operating Anchore Enterprise itself.
Templates, Filters, and Columns
Every report is generated from a template that defines the filters a user can apply and the columns that appear in the output. Templates come in two varieties:
- System templates — shipped with Anchore Enterprise and maintained by the platform.
- User templates — copies of system templates (or of other user templates) with filters, default values, and column layouts adjusted for a particular team’s workflow. User templates can be edited or deleted; system templates cannot.
Anchore Enterprise ships a set of system templates that cover the most common questions, organized around three themes:
- Policy compliance — Images Failing Policy Evaluation and Policy Compliance History by Tag identify non-compliant images and track compliance movement over time.
- Vulnerability discovery — Images With Critical Vulnerabilities, Artifacts by Vulnerability, Tags by Vulnerability, and Images Affected by Vulnerability locate vulnerable software and the specific images it appears in, allowing queries that start from either the image or the CVE.
- Runtime pivots — Vulnerabilities by Kubernetes Namespace, Vulnerabilities by Kubernetes Container, and Vulnerabilities by ECS Container cross fleet-wide findings with runtime inventory, so security teams can focus on workloads that are actually running.
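Conceptually, each template encodes a parameterized query over stored records. As a non-API illustration, “which images are failing policy?” reduces to a filter like:

```python
# Illustrative sketch of the query a compliance template encodes;
# the row shape is hypothetical, not a reporting API.
def images_failing_policy(evaluations):
    return sorted({e["image"] for e in evaluations if e["final_action"] == "fail"})
```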
Ad-Hoc and Scheduled Execution
Reports can be run in two modes:
- Ad hoc — a user selects a template, applies filters, previews the results, and downloads the output. Appropriate for one-off questions and exploratory analysis.
- Scheduled — a saved report is configured to run daily, weekly, or monthly on a chosen day and time. Scheduled runs feed the results into the platform’s notification system, so the team receives the report through the same channels they use for other Anchore Enterprise alerts (email, Slack, Microsoft Teams, Jira, GitHub, or webhook).
Scheduled reporting is what turns point-in-time answers into continuous awareness. A weekly “Images With Critical Vulnerabilities” report, delivered to a security channel every Monday morning, makes remediation work visible before it becomes urgent.
Report results are available in several formats to match how they are consumed:
- Tabular — an in-browser view for interactive review.
- JSON — machine-readable output for ingestion into other tools or dashboards.
- CSV — spreadsheet-friendly output for manual analysis, sharing, or archive.
The CSV format in particular is useful when the underlying result set is large: the UI may truncate a long list for display, but the CSV download contains the full filtered record set.
Where Reporting Fits
Because reporting reads from the same stored SBOMs and evaluations that every other part of Anchore Enterprise writes to, it supports several distinct workflows with the same underlying engine:
- Finding what to remediate — the primary discovery input for Remediation. Before a team can triage, annotate, or fix a finding, they have to find it across a fleet.
- Audit and compliance evidence — scheduled reports produce a time-stamped record of policy evaluations and vulnerability exposure, suitable for regulatory and customer audits.
- Stakeholder communication — engineering, security, and executive audiences each want a different slice of the same data. User templates make it practical to produce a tailored view for each without maintaining parallel tooling.
For how to author templates, create and schedule reports, and manage report results, see Reporting and Remediation. For tuning the Reporting Service itself, see Reporting Service configuration.
5 - Remediation
Anchore Enterprise produces a continuous stream of findings against your images — enumerated vulnerabilities discovered in the SBOMs extracted from container images, plus policy failures produced during compliance evaluation. Teams typically learn that a finding needs attention through the Reporting engine (for example, a scheduled “Images With Critical Vulnerabilities” report landing in a team channel, or a “Policy Compliance History by Tag” report showing a newly failing tag), or through direct Notifications when a tag’s vulnerability list or policy-evaluation result changes. Either path surfaces the same underlying data; the difference is whether the team pulls it via a query or is pushed to it via an event.
Knowing about a finding is only the first step — remediation is how teams triage, document, and close out those findings. Anchore Enterprise supports three complementary remediation mechanisms, each suited to a different kind of response.
VEX Annotations on Vulnerability Findings
A VEX annotation (Vulnerability Exploitability eXchange) is a per-vulnerability statement declaring whether a given CVE is actually exploitable in the context of a particular image. Every vulnerability enumerated in an SBOM can be annotated individually, and annotated vulnerabilities are filtered out of reports and dashboards by default — so teams stop re-triaging the same non-exploitable findings on every scan.
Each annotation carries:
- A status — Not Affected, Affected, Fixed, or Under Investigation.
- A justification (for Not Affected) — for example, Vulnerable Code Not in Execute Path or Inline Mitigation Already Exists.
- Free-form impact, action, and additional-detail statements that explain the decision to downstream consumers.
Annotations can be exported as machine-readable VEX documents in OpenVEX or CycloneDX format and shared with customers, auditors, or other stakeholders. They can also drive policy decisions: the vulnerabilities gate accepts filters such as missing annotation and annotation status to enforce an annotation discipline.
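The default filtering behaviour described above can be sketched as follows. Field names here are hypothetical, not the product's data model:

```python
# Findings carrying a VEX annotation are hidden from the default view
# unless explicitly requested.
def visible_findings(findings, include_annotated=False):
    if include_annotated:
        return findings
    return [f for f in findings if f.get("annotation") is None]
```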
For the full status and justification vocabulary, RBAC requirements, and export formats, see Vulnerability Annotations and VEX.
Action Workbench for Policy Failures
When an image fails a policy evaluation, the Action Workbench in the Enterprise UI is where teams plan, assign, and communicate the response. Action Workbench lives on the image’s Artifact Analysis view and surfaces two capabilities:
- Action plans — a structured grouping of resolutions for the specific policy failures and vulnerabilities on an image. Each resolution associates a remediation message with one or more trigger IDs from the policy evaluation, so the context of the failure travels with the remediation.
- Notification delivery — an action plan can be dispatched to various destinations such as email, chat tools, and issue trackers through a preconfigured notification endpoint. This makes the workbench the natural bridge from “Anchore Enterprise flagged this” to “the team responsible has been told what to do.” For the canonical list of destinations and their setup, see Supported Endpoints.
Action plans are also available via the API, which lets CI jobs or custom integrations generate them programmatically.
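To make the structure concrete, here is a sketch of what an action-plan payload could look like. The field names are hypothetical and do not reproduce the exact Enterprise API schema; see Reporting and Remediation for the canonical payload:

```python
# Hypothetical action-plan payload shape; field names are illustrative,
# not the authoritative API schema.
action_plan = {
    "type": "remediation",
    "image_digest": "sha256:0000000000000000",
    "subject": "Remediation plan for myapp:latest",
    "message": "Please address before the next release cut.",
    "resolutions": [
        {
            # Trigger IDs come from the failed policy evaluation, so the
            # context of each failure travels with its remediation message.
            "trigger_ids": ["CVE-2021-44228+log4j-core"],
            "content": "Upgrade log4j-core to 2.17.1 in the base image.",
        }
    ],
}

# A CI job could submit a payload like this programmatically and have it
# dispatched to a preconfigured notification endpoint.
print(len(action_plan["resolutions"]))
```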
For the action-plan payload, supported types, and permission requirements, see Reporting and Remediation.
Corrections for False-Positive Matches
Not every finding is an exploitability question — some are simply wrong. A vulnerability match is only as accurate as the identifiers Anchore Enterprise attaches to each package, and those identifiers (primarily Package URLs and CPEs) are synthesized from the metadata available at analysis time. That metadata is often incomplete or ambiguous:
- Java/Maven packages frequently omit the canonical groupId and artifactId from their JAR manifests, so the analyzer has to guess a purl from partial information.
- Multi-valued version fields in JAR manifests mean the analyzer must pick a “best” version, which may not be the one the vulnerability feed keyed against.
- CPE candidates are synthesized heuristically from vendor, product, and version metadata, and the synthesized vendor/product string does not always agree with what downstream feeds (NVD, GHSA, vendor feeds) use to describe the same component.
When any of these guesses drifts from the identifier the vulnerability feed actually uses, the package is matched against the wrong vulnerability records — a finding that looks real but does not reflect the software actually installed.
A correction overrides the extracted metadata (CPEs, Package URLs, package name, and related fields) at scan time so subsequent evaluations match against the right identifiers. Corrections are the appropriate tool when the match itself is wrong, as opposed to correct-but-not-exploitable (a VEX case).
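The sketch below shows the general shape of a correction record for the Java/Maven case described above, where NVD keys Spring findings under a vendor/product pair that differs from the artifactId in the JAR manifest. Field names follow the broad outline of the correction format, but consult Corrections for the authoritative schema:

```python
# Sketch of a correction record; the structure is illustrative and the
# exact schema is documented in Corrections.
correction = {
    "description": "Align spring-core's CPE with the identifier NVD uses.",
    "type": "package",
    "match": {
        "type": "java",
        "field_matches": [
            # Match the package as the analyzer identified it.
            {"field_name": "package", "field_value": "spring-core"},
        ],
    },
    "replace": [
        {
            # NVD describes this component as pivotal/spring_framework,
            # not by the artifactId the manifest exposes.
            "field_name": "cpes",
            "field_value": "cpe:2.3:a:pivotal:spring_framework:5.1.4:*:*:*:*:*:*:*",
        }
    ],
}

print(correction["replace"][0]["field_value"])
```

Once a correction like this is in place, subsequent evaluations match the package against the right vulnerability records instead of the heuristically guessed ones.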
For the correction format, supported package types, and worked examples, see Corrections.
Data Management
Anchore Enterprise is a data-intensive system. Every analyzed artifact produces records — SBOMs, vulnerability matches, policy evaluations, annotations — that continue to pay returns long after the initial analysis: re-evaluation against new vulnerability feeds, audit trails, base-image comparison, and drift detection all rely on stored history. But that history grows without bound as new artifacts are analyzed, and large deployments need a way to control storage consumption without losing the data that is actually in use.
Anchore Enterprise provides two complementary mechanisms for managing analysis data over its life: the analysis archive, which moves old records to cheaper storage while keeping them restorable, and artifact lifecycle policies, which delete old records outright.
Working Set and Archive Set
Analysis data lives in one of two sets at any given time:

- Working set — analyses in the analyzed state, fully available for policy evaluation, content queries, feed-driven re-evaluation, and vulnerability updates. This is where the platform operates day-to-day.
- Archive set — point-in-time snapshots held in (optionally separate) object storage. Archived analyses preserve all annotations, tags, metadata, and policy history, consume minimal database space, and are not updated with new vulnerability feeds unless restored. The archive set is designed for long-term retention at low cost.
An analysis may exist in both sets simultaneously — the archive is not exclusive. An archived analysis can be restored to the working set at any time without re-downloading or re-analyzing the original artifact, because the archive captures everything needed to rehydrate the record.
Automatic Archiving
Anchore Enterprise supports archive rules that automatically move analyses from the working set to the archive based on criteria like analysis age, tag history depth, or runtime-inventory last-seen date. Rules can be scoped to an account or made system-global, and they run on a recurring catalog duty cycle. When an archive rule matches, the analysis is moved — the archive copy is created and the working-set copy removed — keeping the working set focused on artifacts the organization actively cares about.
The same rule framework also supports a delete transition that operates on the archive set, purging old archived analyses entirely. For the rule fields, JSON structure, and CLI management, see Analysis Archive.
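An archive rule combining the criteria above might look roughly like the following. The field names mirror the general shape of the rule JSON, but the exact schema and defaults are documented in Analysis Archive:

```python
# Sketch of an archive rule; field names are illustrative of the rule
# JSON's general shape, not the authoritative schema.
archive_rule = {
    # Move analyses older than 90 days out of the working set...
    "analysis_age_days": 90,
    # ...but only when at least one newer analysis exists for the tag,
    # so the most recent analysis always stays hot.
    "tag_versions_newer": 1,
    # "archive" moves working-set records to the archive set; a "delete"
    # transition instead purges old records from the archive set.
    "transition": "archive",
    # Apply across all accounts rather than a single one.
    "system_global": True,
    # Glob selectors scope the rule to particular registries/repos/tags.
    "selector": {"registry": "*", "repository": "*", "tag": "*"},
}

print(archive_rule["transition"])
```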
Artifact Lifecycle Policies
Artifact lifecycle policies are a stricter, delete-only counterpart to archive rules. They evaluate a set of criteria — analysis age, tag history, runtime-inventory status — and when a matching image or imported SBOM is found, the policy permanently deletes the record. There is no archive step; the goal is simply to keep the deployment lean.
Artifact lifecycle policies are system-global by design and can be administered only by system administrators. They apply across every account and execute on a scheduled catalog cycle. They are the right tool when retention is not required — for example, short-lived scratch accounts that never need historical lookups, or compliance regimes that actively prefer data minimization.
For the policy fields, supported actions, and configuration specifics, see Artifact Lifecycle Policies.
Choose Between Archive and Delete
The two mechanisms answer different questions:
- If you need the data later — for audit, customer lookup, historical comparison, or slow-moving compliance review — use archive rules to move it out of the hot working set while keeping it recoverable.
- If you do not need the data later — old, low-value analyses that only add storage cost — use artifact lifecycle policies to delete them outright and keep the database lean.
Many deployments combine the two: archive rules move analyses older than N days to cheaper storage, and artifact lifecycle policies delete records older than M months. This yields a multi-tier retention model — hot working set, cheap archive tier, eventual purge — without requiring anyone to touch the data manually.
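The combined model can be pictured as a simple age-based decision. The thresholds below are illustrative stand-ins for the N-day and M-month values a deployment would choose:

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds: archive after 90 days, purge after a year.
ARCHIVE_AFTER = timedelta(days=90)
DELETE_AFTER = timedelta(days=365)

def retention_tier(analyzed_at: datetime, now: datetime) -> str:
    """Model of the hot / archive / purge tiers described above."""
    age = now - analyzed_at
    if age >= DELETE_AFTER:
        return "delete"       # artifact lifecycle policy territory
    if age >= ARCHIVE_AFTER:
        return "archive"      # archive rule territory
    return "working-set"      # hot data, fully evaluated on every feed update

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(retention_tier(now - timedelta(days=10), now))   # working-set
print(retention_tier(now - timedelta(days=120), now))  # archive
print(retention_tier(now - timedelta(days=400), now))  # delete
```

In the real system this decision is made by the rule engines on their catalog cycles rather than by user code; the function just makes the tiering visible.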
For the storage backend that backs the archive tier (Postgres by default, or an S3-compatible object store for scale and cost), see Object Store: Analysis Archive.