Data Management

Anchore Enterprise is a data-intensive system. Every analyzed artifact produces records — SBOMs, vulnerability matches, policy evaluations, annotations — that continue to pay returns long after the initial analysis: re-evaluation against new vulnerability feeds, audit trails, base-image comparison, and drift detection all rely on stored history. But that history grows without bound as new artifacts are analyzed, and large deployments need a way to control storage consumption without losing the data that is actually in use.

Anchore Enterprise provides two complementary mechanisms for managing analysis data over its life: the analysis archive, which moves old records to cheaper storage while keeping them restorable, and artifact lifecycle policies, which delete old records outright.

Working Set and Archive Set

Analysis data lives in one of two sets at any given time:

[Figure: working-set and archive-set image analyses]

  • Working set — analyses in the analyzed state, fully available for policy evaluation, content queries, feed-driven re-evaluation, and vulnerability updates. This is where the platform operates day-to-day.
  • Archive set — point-in-time snapshots held in (optionally separate) object storage. Archived analyses preserve all annotations, tags, metadata, and policy history, consume minimal database space, and are not updated with new vulnerability feeds unless restored. The archive set is designed for long-term retention at low cost.

An analysis may exist in both sets simultaneously — the archive is not exclusive. An archived analysis can be restored to the working set at any time without re-downloading or re-analyzing the original artifact, because the archive captures everything needed to rehydrate the record.

Automatic Archiving

Anchore Enterprise supports archive rules that automatically move analyses from the working set to the archive based on criteria like analysis age, tag history depth, or runtime-inventory last-seen date. Rules can be scoped to an account or made system-global, and they run on a recurring catalog duty cycle. When an archive rule matches, the analysis is moved — the archive copy is created and the working-set copy removed — keeping the working set focused on artifacts the organization actively cares about.

The same rule framework also supports a delete transition that operates on the archive set, purging old archived analyses entirely. For the rule fields, JSON structure, and CLI management, see Analysis Archive.
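
As a rough sketch of what rule management can look like through the API, the example below registers one archive-transition rule and one delete-transition rule. The base URL, credentials, endpoint path, and field names (analysis_age_days, tag_versions_newer, transition, system_global) are assumptions made for illustration, not the authoritative schema; the Analysis Archive reference and the CLI are the source of truth.

```python
import requests

ANCHORE_URL = "http://localhost:8228/v1"   # assumed API base URL
AUTH = ("admin", "changeme")               # replace with real credentials

# Hypothetical archive rule: move analyses older than 90 days to the archive,
# keeping the newest analysis for each tag in the working set.
# Field names here are illustrative; see the Analysis Archive reference.
archive_rule = {
    "analysis_age_days": 90,
    "tag_versions_newer": 1,
    "transition": "archive",
    "system_global": False,
}

# Companion delete-transition rule operating on the archive set: purge
# archived analyses once they are a year old.
delete_rule = {
    "analysis_age_days": 365,
    "tag_versions_newer": 0,
    "transition": "delete",
    "system_global": False,
}

for rule in (archive_rule, delete_rule):
    # Endpoint path is an assumption for this sketch.
    resp = requests.post(f"{ANCHORE_URL}/archives/rules", json=rule, auth=AUTH)
    resp.raise_for_status()
    print("created rule:", resp.json())
```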

Artifact Lifecycle Policies

Artifact lifecycle policies are a stricter, delete-only counterpart to archive rules. They evaluate a set of criteria — analysis age, tag history, runtime-inventory status — and when a matching image or imported SBOM is found, the policy permanently deletes the record. There is no archive step; the goal is simply to keep the deployment lean.

Artifact lifecycle policies are system-global by design and can be administered only by system administrators. They apply across every account and execute on a scheduled catalog cycle. They are the right tool when retention is not required — for example, short-lived scratch accounts that never need historical lookups, or compliance regimes that actively prefer data minimization.

For the policy fields, supported actions, and configuration specifics, see Artifact Lifecycle Policies.
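
To make the criteria concrete, here is a minimal sketch of the kind of decision a delete-only policy applies. The thresholds, record fields, and function are hypothetical stand-ins for illustration, not the product's actual policy fields or evaluation code.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical thresholds, purely to illustrate the kind of criteria a
# lifecycle policy evaluates; the real fields are documented in
# Artifact Lifecycle Policies.
MAX_ANALYSIS_AGE = timedelta(days=180)
MAX_DAYS_UNSEEN = timedelta(days=30)

def should_delete(analyzed_at: datetime,
                  last_seen_in_runtime: Optional[datetime],
                  newer_tag_versions: int) -> bool:
    """Return True when an analysis matches every delete criterion."""
    now = datetime.now(timezone.utc)
    too_old = now - analyzed_at > MAX_ANALYSIS_AGE
    superseded = newer_tag_versions >= 1   # a newer analysis exists for the same tag
    unseen = (last_seen_in_runtime is None
              or now - last_seen_in_runtime > MAX_DAYS_UNSEEN)
    return too_old and superseded and unseen

# A year-old analysis, superseded by newer tag versions and absent from the
# runtime inventory, would be purged.
stale = datetime.now(timezone.utc) - timedelta(days=400)
print(should_delete(stale, None, 2))   # True
```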


Choosing Between Archive and Delete

The two mechanisms answer different questions:

  • If you need the data later — for audit, customer lookup, historical comparison, or slow-moving compliance review — use archive rules to move it out of the hot working set while keeping it recoverable.
  • If you do not need the data later, and the goal is simply to keep the database lean, use artifact lifecycle policies to delete old, low-value analyses outright.

Many deployments combine the two: archive rules move analyses older than N days to cheaper storage, and artifact lifecycle policies delete records older than M months. This yields a multi-tier retention model — hot working set, cheap archive tier, eventual purge — without requiring anyone to touch the data manually.
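
As a minimal sketch of that timeline, assuming thresholds of 60 days for archiving and roughly a year for deletion, the snippet below classifies an analysis by where it lives under the combined model:

```python
from datetime import timedelta

# Illustrative thresholds for a combined retention model (assumed values).
ARCHIVE_AFTER = timedelta(days=60)    # archive rule: N days
DELETE_AFTER = timedelta(days=365)    # delete/lifecycle rule: M months

def retention_tier(age: timedelta) -> str:
    """Classify where an analysis of a given age lives under the combined model."""
    if age < ARCHIVE_AFTER:
        return "working set (hot, fully evaluated)"
    if age < DELETE_AFTER:
        return "archive set (cheap storage, restorable)"
    return "purged (permanently deleted)"

for days in (10, 120, 400):
    print(days, "days ->", retention_tier(timedelta(days=days)))
```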

For details on the storage backend behind the archive tier (Postgres by default, or an S3-compatible object store for scale and cost), see Object Store: Analysis Archive.
