1 - Database Driver
The default object store driver is the PostgreSQL database driver, which stores all object store documents in the PostgreSQL database.
A component of the object store driver is the archive_document. When the default object store driver is used, as opposed to configuring an S3 bucket, this is where image SBOMs, vulnerability scans, policy evaluations, and reports are stored.
Compression is not supported for this driver since the underlying database handles compression itself.
There are no configuration options required for the Database driver.
The embedded configuration for Anchore Enterprise includes the default configuration for the db driver:
object_store:
  compression:
    enabled: false
    min_size_kbytes: 100
  storage_driver:
    name: db
    config: {}
2 - Analysis Archive Storage Configuration
For information on what the analysis archive is and how it works, see Concepts: Analysis Archive.
The Analysis Archive is an object store with specific semantics, and is therefore configured with the same options as the object_store used for the active working set of images, just under a different config key: analysis_archive.
Amazon S3 Example
Anchore strongly recommends using IAM roles for secure access to Amazon S3.
Example configuration snippet for using the DB for working set object store and Amazon S3 for the analysis archive:
...
services:
  ...
  catalog:
    ...
    object_store:
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
    analysis_archive:
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: 's3'
        config:
          iamauto: true
          region: <AWS_REGION_HERE>
          bucket: 'anchorearchive'
          create_bucket: true
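Before starting services with a new object store configuration, you can validate it with the anchore-manager check command covered in the migration guide below (a sketch; ${db} stands for your database connection string from config.yaml):
anchore-manager objectstorage --db-connect ${db} check /config/config.yaml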
S3-Compatible Example
Example configuration snippet for using the DB for working set object store and S3-API compatible object storage for the analysis archive:
...
services:
  ...
  catalog:
    ...
    object_store:
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
    analysis_archive:
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: 's3'
        config:
          access_key: 'MY_ACCESS_KEY'
          secret_key: 'MY_SECRET_KEY'
          url: 'https://my-s3-compatible-endpoint.example.com:optional_port'
          region: false
          bucket: 'anchorearchive'
          create_bucket: true
Default Configuration
By default, if no analysis_archive config is found or the property is not present in config.yaml, the analysis archive will use the object_store or archive (for backwards compatibility) config sections and their defaults (e.g. db if found).
Anchore stores all analysis archive objects in an internal logical bucket named analysis_archive, which is kept distinct within the configured backend (e.g. as a key prefix in the S3 bucket).
Changing Configuration
If you need to update the configuration to use a different backend, there is no data to move unless image analyses are actually in the archive. Once an image analysis has been archived, however, updating the configuration requires following the object storage data migration process found here. As noted in that guide, if you need to migrate to or from an analysis_archive config, you must use the --from-analysis-archive/--to-analysis-archive options to tell the migration process which configuration to use in the source and destination config files used for the migration.
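For example, migrating archived analyses out of a combined object store into a dedicated analysis archive backend might look like the following (a sketch; ${db} stands for your database connection string, and it assumes --to-analysis-archive is passed as an option to the migrate subcommand described in the migration guide):
anchore-manager objectstorage --db-connect ${db} migrate --to-analysis-archive /config/config.yaml /config/dest-config.yaml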
3 - Amazon S3
This page describes configuration when using Amazon S3 for object storage with IAM role authentication.
Anchore strongly recommends using IAM roles for secure access to Amazon S3.
IAM Role Authentication
For Anchore to use an AWS IAM role, the environment it runs in (such as an EC2 instance, ECS task, or Kubernetes pod) must have an AWS IAM role with the necessary S3 bucket permissions:
"Action": [
"s3:PutObject*",
"s3:GetObject*",
"s3:DeleteObject*",
],
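For illustration, a complete policy statement granting these actions might look like the following (a sketch; the bucket name my-anchore-data and the resource ARN are assumptions, and your deployment may need additional bucket-level actions such as s3:ListBucket):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject*",
        "s3:GetObject*",
        "s3:DeleteObject*"
      ],
      "Resource": "arn:aws:s3:::my-anchore-data/*"
    }
  ]
}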
In the storage_driver section of your values.yaml file, set the iamauto parameter to true:
services:
  catalog:
    archive:
      storage_driver:
        name: 's3'
        config:
          iamauto: true
With iamauto: true, Anchore automatically adopts the IAM role of its host environment. This is the most secure method for granting Amazon S3 access, as it removes the need to store credentials such as ACCESS_KEY and SECRET_KEY in configuration files.
Other S3 Configuration Options
Below are other configurable parameters for the Anchore S3 driver.
The Anchore S3 driver supports document compression to reduce storage space. Set enabled to true to enable compression or false to disable it; min_size_kbytes sets the minimum document size in kilobytes for a document to be compressed.
config:
  ...
  compression:
    enabled: true
    min_size_kbytes: 1
- region - the AWS region of your Amazon S3 bucket. It is required if url is not specified.
- bucket - the name of the Amazon S3 bucket for Anchore's data storage.
- create_bucket - if set to true, Anchore will attempt to create the bucket if it doesn't exist. It is, however, recommended to pre-create the bucket.
Example
Here is a full configuration example for the S3 driver using IAM role authentication:
services:
  catalog:
    archive:
      storage_driver:
        name: 's3'
        config:
          # AWS IAM role authentication
          iamauto: true
          # Amazon S3 bucket configuration
          region: 'us-east-1'
          bucket: 'my-anchore-data'
          create_bucket: false
          # Optional compression
          compression:
            enabled: true
            min_size_kbytes: 1
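Before starting Anchore, you can sanity-check that the IAM role attached to your environment can reach the bucket (a sketch using the AWS CLI with the bucket name from the example above; it assumes the role also allows listing via s3:ListBucket):
aws s3 ls s3://my-anchore-data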
4 - S3-Compatible
Anchore Enterprise can be configured to use third-party S3 API-compatible object storage systems.
Anchore strongly recommends using Kubernetes secrets rather than plaintext entries in your values.yaml to store your S3-compatible access keys.
Example Configuration
object_store:
  compression:
    enabled: false
    min_size_kbytes: 100
  storage_driver:
    name: 's3'
    config:
      access_key: 'MY_ACCESS_KEY'
      secret_key: 'MY_SECRET_KEY'
      #iamauto: true
      url: 'https://my-s3-compatible-endpoint.example.com:optional_port'
      region: false
      bucket: "anchorearchive"
      create_bucket: true
Configuration Options
The following additional configuration parameters can be used.
Compression
The S3 driver supports compression of documents. The documents are JSON formatted and see a significant reduction in size through compression, but there is overhead incurred by running compression and decompression on every access of these documents. Anchore Enterprise can be configured to only compress documents above a certain size to reduce unnecessary overhead. In the example above, any document over 100 KB in size will be compressed.
Authentication
Anchore Enterprise can authenticate against the S3-compatible service using access keys.
Endpoints
url - (required) The URL used to reach the S3-API compatible service. Note that if url is configured, the region config value is ignored, as region is only used for Amazon S3.
Buckets
bucket - (required) The name of the S3 bucket that Anchore will use for storing data.
create_bucket - (default: false) Try to create the bucket if it doesn't already exist. This should be used very sparingly. For most cases, you should pre-create the bucket so that it has the permissions you desire, then set this to false.
Storing Object Store API keys in a Kubernetes Secret
You can configure your object store API keys to be pulled from a Kubernetes Secret as follows:
extraEnv:
  - name: ANCHORE_OBJ_STORAGE_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: minio-secret
        key: accessKey
  - name: ANCHORE_OBJ_STORAGE_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: minio-secret
        key: secretKey
anchoreConfig:
  catalog:
    object_store:
      storage_driver:
        name: s3
        config:
          access_key: ${ANCHORE_OBJ_STORAGE_ACCESS_KEY}
          secret_key: ${ANCHORE_OBJ_STORAGE_SECRET_KEY}
In this example the secret is called minio-secret, but you can use whatever name you would like. The secret looks as follows:
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
data:
  accessKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  secretKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
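Rather than hand-writing the base64-encoded manifest, you can create this secret directly from your access keys (a sketch; kubectl encodes the literal values for you):
kubectl create secret generic minio-secret \
  --from-literal=accessKey=MY_ACCESS_KEY \
  --from-literal=secretKey=MY_SECRET_KEY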
5 - Migrating Data to New Drivers
To migrate data from one driver to another (e.g. DB to S3), Anchore Enterprise includes capabilities that automate the process in the anchore-manager tool packaged with the system. For Helm-based deployments, this is further automated via Helm upgrade helpers, whereas for Docker Compose deployments the tool must be run manually.
Caution
The migration process is an offline process; Anchore Enterprise is not designed to handle an online migration. The object storage migration process migrates any data stored in the source config to the destination configuration. If the analysis archive is configured to use the same storage backend as the primary object store, that data is migrated along with all other data. If the source or destination configurations define different storage backends for the analysis archive than the one used by the primary object store, additional parameters are necessary to indicate which configurations to migrate to/from.
The most common migration patterns are:
- Migrate from a single backend configuration to a split configuration to keep the Active Data Set in the DB and then move the Archive Data Set (analysis archive data) to an external system (db -> db + s3)
- Migrate from a dual-backend configuration to a single-backend configuration with a different config (e.g. db + s3-compatible -> s3-compatible)
At a high-level the process is:
- Shut down all Anchore Enterprise services and components. The system should be fully offline, but the database must be online and available. For a Docker Compose install, this is achieved by simply stopping the engine container, but not deleting it.
- Prepare a new config.yaml that includes the new driver configuration for the destination of the migration (dest-config.yaml), in the same location as the existing config.yaml.
- Test the new dest-config.yaml to ensure correct configuration.
- Run the migration.
- Get coffee… this could take a while if you have a lot of analysis data.
- When complete, view the results.
- Ensure the dest-config.yaml is in place for all the components as config.yaml.
- Start Anchore Enterprise services and components.
EXAMPLE: Migration of Object Store in Helm-based Deployment from DB to Amazon S3
The Anchore Enterprise Helm Chart provides a way to run the migration steps listed in this page automatically by spinning up a job and crafting the configs required and running the necessary migration commands. Further information is available via instructions found in our Helm Chart here. Below are example configurations:
Note
Anchore recommends using IAM roles for access to Amazon S3, as in the example below; you can find role configuration details here.
# example config
osaaMigrationJob:
  enabled: true # note that we are enabling the migration job
  analysisArchiveMigration:
    run: true # we are specifying to run the analysis_archive migration
    bucket: "analysis_archive"
    mode: to_analysis_archive
    # the deployment will be migrated to use the following configs for catalog.analysis_archive
    analysis_archive:
      enabled: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config:
          iamauto: true
          region: <MY_AWS_REGION>
          bucket: analysisarchive
  objectStoreMigration:
    run: true
    # note that since this is the same as anchoreConfig.catalog.object_store, the migration
    # command for migrating the object store will still run, but it will not do anything as there
    # is nothing to be done
    object_store:
      verify_content_digests: true
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}

# the deployment was previously deployed using the following configs
anchoreConfig:
  default_admin_password: foobar
  catalog:
    analysis_archive:
      enabled: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
    object_store:
      verify_content_digests: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
EXAMPLE: Migration of Object Store in Helm-based Deployment from DB to S3-compatible
# example config
osaaMigrationJob:
  enabled: true # note that we are enabling the migration job
  analysisArchiveMigration:
    run: true # we are specifying to run the analysis_archive migration
    bucket: "analysis_archive"
    mode: to_analysis_archive
    # the deployment will be migrated to use the following configs for catalog.analysis_archive
    analysis_archive:
      enabled: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config:
          access_key: MY_ACCESS_KEY
          secret_key: MY_SECRET_KEY
          url: 'https://my-s3-compatible-endpoint.example.com:optional_port'
          region: null
          bucket: analysisarchive
  objectStoreMigration:
    run: true
    # note that since this is the same as anchoreConfig.catalog.object_store, the migration
    # command for migrating the object store will still run, but it will not do anything as there
    # is nothing to be done
    object_store:
      verify_content_digests: true
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}

# the deployment was previously deployed using the following configs
anchoreConfig:
  default_admin_password: foobar
  catalog:
    analysis_archive:
      enabled: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
    object_store:
      verify_content_digests: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
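After setting these values, applying them is a standard Helm upgrade, which triggers the migration job (a sketch; the release name anchore, the namespace, and the chart reference anchore/enterprise are assumptions, so substitute your own):
helm upgrade anchore anchore/enterprise -n anchore -f values.yaml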
EXAMPLE: Migration of Object Store in Docker Compose from DB to S3-compatible
The following example demonstrates migration for a Docker Compose deployment.
Preparing for Migration
For the migration process you will need:
- The original config.yaml used by the services already. If services are split out or use different config.yaml files for different services, you need the config.yaml used by the catalog services.
- An updated config.yaml (named dest-config.yaml in this example), with the archive driver section of the catalog service config set to the config you want to migrate to.
- The db connection string from config.yaml; this is needed by the anchore-manager script directly (an example of exporting it is shown after this list).
- Credentials and resources (bucket etc.) for the destination of the migration.
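The commands below reference the connection string through a shell variable, ${db}. For example (a sketch; the connection string shown matches the one in the log output later in this guide, so substitute your own):
export db="postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres"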
If Anchore Enterprise is deployed using Docker Compose, the migration must be manually initiated using the anchore-manager script. The following is an example migration for Anchore Enterprise deployed via Docker Compose on a single host with a local postgresql container. This process requires that you run the command in a location that has access to both the source archive driver configuration and the new archive driver configuration.
Step 1: Shut down all services
All services should be stopped, but the postgresql db must still be available and running. You can use the docker compose stop command and supply all service names except the DB:
docker compose stop anchore-analyzer anchore-api anchore-catalog anchore-policy-engine anchore-queue anchore-enterprise-api-gateway anchore-enterprise-rbac-service redis
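You can then confirm that only the database container is still running (a sketch):
docker compose ps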
Step 2: Prepare a new config.yaml
Both the original and new configurations are needed, so create a copy and update the archive driver section to the configuration you want to migrate to:
cd config
cp config.yaml dest-config.yaml
<edit dest-config.yaml>
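For example, the catalog service's archive driver section in dest-config.yaml might be set to the S3-compatible configuration used in the rest of this walkthrough (a sketch assembled from the migration log output below; the surrounding object_store key is an assumption about where the section lives in your config.yaml):
object_store:
  compression:
    enabled: true
    min_size_kbytes: 100
  storage_driver:
    name: s3
    config:
      access_key: 9EB92C7W61YPFQ6QLDOU
      secret_key: TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s
      url: http://minio-ephemeral-test:9000/
      region: false
      bucket: anchore-engine-testing
      prefix: internaltest
      create_bucket: true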
Step 3: Test the destination config
Assuming that config is dest-config.yaml:
$ docker compose run anchore-catalog /bin/bash
[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} check /config/dest-config.yaml
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Using config file /config/dest-config.yaml
[MainThread] [anchore_engine.subsys.object_store.operations/initialize()] [INFO] Archive initialization complete
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Checking existence of test document with user_id = test, bucket = anchorecliconfigtest and archive_id = cliconfigtest
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Creating test document with user_id = test, bucket = anchorecliconfigtest and archive_id = cliconfigtest
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Checking document fetch
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Removing test object
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Archive config check completed successfully
Step 3a: Test the current config.yaml
If you are running the migration from a different location than one of the Anchore Enterprise containers, run the same check as above but with /config/config.yaml as the input (skipped in this instance since we're running the migration from the same container).
Step 4: Run the migration
By default, the migration process will remove data from the source once it has confirmed it has been copied to the destination and the metadata has been updated in the Anchore DB. To skip the deletion on the source, use the --nodelete option. It is the safest option, but if you use it, you are responsible for removing the source data later.
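For example, to migrate while keeping the source data in place (a sketch; it assumes --nodelete is passed as an option to the migrate subcommand):
anchore-manager objectstorage --db-connect ${db} migrate --nodelete /config/config.yaml /config/dest-config.yaml
The actual run below uses the default behavior and deletes from the source: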
[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} migrate /config/config.yaml /config/dest-config.yaml
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Loading configs
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration from config: {
"storage_driver": {
"config": {},
"name": "db"
},
"compression": {
"enabled": false,
"min_size_kbytes": 100
}
}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration to config: {
"storage_driver": {
"config": {
"access_key": "9EB92C7W61YPFQ6QLDOU",
"create_bucket": true,
"url": "http://minio-ephemeral-test:9000/",
"region": false,
"bucket": "anchore-engine-testing",
"prefix": "internaltest",
"secret_key": "TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s"
},
"name": "s3"
},
"compression": {
"enabled": true,
"min_size_kbytes": 100
}
}
Performing this operation requires *all* anchore-engine services to be stopped - proceed? (y/N)y
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Initializing migration from {'storage_driver': {'config': {}, 'name': 'db'}, 'compression': {'enabled': False, 'min_size_kbytes': 100}} to {'storage_driver': {'config': {'access_key': '9EB92C7W61YPFQ6QLDOU', 'create_bucket': True, 'url': 'http://minio-ephemeral-test:9000/', 'region': False, 'bucket': 'anchore-engine-testing', 'prefix': 'internaltest', 'secret_key': 'TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s'}, 'name': 's3'}, 'compression': {'enabled': True, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/migration_context()] [INFO] Initializing source object_store: {'storage_driver': {'config': {}, 'name': 'db'}, 'compression': {'enabled': False, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/migration_context()] [INFO] Initializing dest object_store: {'storage_driver': {'config': {'access_key': '9EB92C7W61YPFQ6QLDOU', 'create_bucket': True, 'url': 'http://minio-ephemeral-test:9000/', 'region': False, 'bucket': 'anchore-engine-testing', 'prefix': 'internaltest', 'secret_key': 'TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s'}, 'name': 's3'}, 'compression': {'enabled': True, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration Task Id: 1
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Entering main migration loop
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migrating 7 documents
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/policy_bundles/2c53a13c-1765-11e8-82ef-23527761d060
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/manifest_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/analysis_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/image_content_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/manifest_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/analysis_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/image_content_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration result summary: {"last_state": "running", "executor_id": "3209ad44d7bb:37:139731996518208:", "archive_documents_migrated": 7, "last_updated": "2018-08-15T18:03:52.951364", "online_migration": null, "created_at": "2018-08-15T18:03:52.951354", "migrate_from_driver": "db", "archive_documents_to_migrate": 7, "state": "complete", "migrate_to_driver": "s3", "ended_at": "2018-08-15T18:03:53.720554", "started_at": "2018-08-15T18:03:52.949956", "type": "archivemigrationtask", "id": 1}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] After this migration, your anchore-engine config.yaml MUST have the following configuration options added before starting up again:
compression:
  enabled: true
  min_size_kbytes: 100
storage_driver:
  config:
    access_key: 9EB92C7W61YPFQ6QLDOU
    bucket: anchore-engine-testing
    create_bucket: true
    prefix: internaltest
    region: false
    secret_key: TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s
    url: http://minio-ephemeral-test:9000/
  name: s3
Note
If something goes wrong you can reverse the parameters of the migrate command to migrate back to the original configuration (e.g. migrate /config/dest-config.yaml /config/config.yaml).
Step 5: Get coffee!
The migration time will depend on the amount of data and the performance of the source and destination systems.
Step 6: View migration results summary
[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} list-migrations
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
id state start time end time from to migrated count total to migrate last updated
1 complete 2018-08-15T18:03:52.949956 2018-08-15T18:03:53.720554 db s3 7 7 2018-08-15T18:03:53.724628
This lists all migrations for the service and the number of objects migrated. If you’ve run multiple migrations you’ll see multiple rows in this response.
Step 7: Replace old config.yaml with updated dest-config.yaml
You should now permanently move the new configuration into place, replacing the old:
[root@3209ad44d7bb ~]# cp /config/config.yaml /config/config.old.yaml
[root@3209ad44d7bb ~]# cp /config/dest-config.yaml /config/config.yaml
Step 8: Restart Anchore Enterprise services
Run the following command at the same location as your docker-compose file to bring all services back up:
docker compose start
The system should now be up and running using the new configuration! You can verify with the anchorectl command by fetching a policy, which will have been migrated:
$ anchorectl policy list
✔ Fetched policies
┌─────────────────────────┬──────────────────────────────────────┬────────┬──────────────────────┐
│ NAME │ POLICY ID │ ACTIVE │ UPDATED │
├─────────────────────────┼──────────────────────────────────────┼────────┼──────────────────────┤
│ Default bundle │ 2c53a13c-1765-11e8-82ef-23527761d060 │ true │ 2022-07-14T22:52:27Z │
│ anchore_security_only │ anchore_security_only │ false │ 2022-07-14T22:52:27Z │
│ anchore_cis_1.13.0_base │ anchore_cis_1.13.0_base │ false │ 2022-07-14T22:52:27Z │
└─────────────────────────┴──────────────────────────────────────┴────────┴──────────────────────┘
$ anchorectl -o json-raw policy get 2c53a13c-1765-11e8-82ef-23527761d060
[
{
"blacklisted_images": [],
"comment": "Default bundle",
"id": "2c53a13c-1765-11e8-82ef-23527761d060",
... <lots of json>
If that returns the content properly, then you’re all done!