Object Storage
Note

Anchore Enterprise allows independent configuration of object storage drivers for both the Active Data Set and the Archive Data Set.

Anchore Enterprise uses a PostgreSQL database by default to store structured data for images, tags, policies, subscriptions, and metadata about images, but other types of data in the system are less structured and tend to be larger. Because of that, there are benefits to supporting key-value access patterns for things like image manifests, analysis reports, and policy evaluations. For such data, Anchore has an internal object storage interface that defaults to the same PostgreSQL database but can be configured to use external object storage providers, supporting simpler capacity management and lower costs. Configuration differs between the two data sets:
For the Active Data Set, configuration for the object store is set in the catalog’s object_store
service configuration in the config.yaml.
For the Archive Data Set, configuration for the object store is set in the catalog’s analysis_archive
configuration in the config.yaml (see Analysis Archive).
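Both settings live under the catalog service in config.yaml. The following is a minimal sketch of the two sections, assuming the default db driver for both; the compression and storage_driver key names follow the commonly documented catalog configuration and may vary between releases, so check the reference config.yaml for your version:

```yaml
services:
  catalog:
    # Active Data Set: object store for manifests, analysis reports, evaluations
    object_store:
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db      # default: objects stored in the same PostgreSQL database
        config: {}
    # Archive Data Set: configured independently of the object store above
    analysis_archive:
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
```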
Migration instructions for moving from the default provider to an external object store can be found here.
Common Configurations
Single shared object store backend: set the object_store details in config.yaml and omit the analysis_archive config from config.yaml, or set it to null or {}.

Different bucket/container: specify both the object_store and analysis_archive configurations identically except for the bucket or container value of the analysis_archive, so that its data is split into a different backend bucket. This allows lifecycle controls or cost optimization, since the archive is accessed much less frequently (if ever).

Primary object store in DB, analysis_archive in external S3: this keeps latency low, since no external service is needed for the object store and active data, but lets you use more scalable external object storage for archive data. This approach is most beneficial if you can keep the working set of images small and quickly transition old analyses to the archive, so the database stays small and the analysis archive handles data scaling over time. A sketch of this layout follows below.
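As an illustration of the last layout, the sketch below keeps the active object store in the database and points the archive at S3. The S3 driver settings (url, access_key, bucket, and so on) follow the commonly documented driver config but are placeholders here; substitute your own endpoint, credentials, and bucket. For the different bucket/container layout, both sections would instead carry the same s3 driver block, differing only in the bucket value:

```yaml
services:
  catalog:
    object_store:
      storage_driver:
        name: db            # active data stays in PostgreSQL for low latency
        config: {}
    analysis_archive:
      compression:
        enabled: true       # archived analyses are rarely read, so compress them
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config:
          access_key: MY_ACCESS_KEY         # placeholder credentials
          secret_key: MY_SECRET_KEY
          url: https://s3.example.com       # placeholder endpoint
          region: null
          bucket: anchore-analysis-archive  # placeholder bucket name
          create_bucket: true
```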