Migrating Data to New Drivers

To migrate data from one driver to another (e.g. DB to S3), Anchore Enterprise includes automation for the process in the anchore-manager tool packaged with the system. For Helm-based deployments, this is further automated via Helm upgrade helpers, whereas for Docker Compose deployments the tool must be run manually.
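The relevant commands live under the objectstorage subcommand group of anchore-manager (used for check, migrate, and list-migrations throughout this page); you can list the available subcommands and options for your version with:

anchore-manager objectstorage --help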

The object storage migration process migrates any data stored in the source configuration to the destination configuration. If the analysis archive is configured to use the same storage backend as the primary object store, that data is migrated along with all other data. If, however, the source or destination configurations define a different storage backend for the analysis archive than the one used by the primary object store, then additional parameters are necessary to indicate which configurations to migrate to/from.
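As a minimal sketch, pointing the migration at the analysis archive specifically might look like the following. This assumes your anchore-manager version supports the --to-analysis-archive and --from-analysis-archive flags; confirm with anchore-manager objectstorage migrate --help before relying on them:

# Migrate data from the source primary object store into the
# analysis archive defined in dest-config.yaml (flags assumed; verify with --help)
anchore-manager objectstorage --db-connect ${db} migrate /config/config.yaml /config/dest-config.yaml --to-analysis-archive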

The most common migration patterns are:

  • Migrate from a single-backend configuration to a split configuration that keeps the Active Data Set in the DB and moves the Archive Data Set (analysis archive data) to an external system (db -> db + s3)
  • Migrate from a dual-backend configuration to a single-backend configuration with a different config (e.g. db + s3-compatible -> s3-compatible)

At a high-level the process is:

  1. Shut down all Anchore Enterprise services and components. The system should be fully offline, but the database must be online and available. For a Docker Compose install, this is achieved by stopping the engine container, but not deleting it.
  2. Prepare a new config.yaml that includes the new driver configuration for the destination of the migration (dest-config.yaml), in the same location as the existing config.yaml.
  3. Test the new dest-config.yaml to ensure the configuration is correct.
  4. Run the migration
  5. Get coffee… this could take a while if you have a lot of analysis data
  6. When complete, view the results
  7. Ensure the dest-config.yaml is in place as config.yaml for all components
  8. Start Anchore Enterprise services and components.

EXAMPLE: Migration of Object Store in Helm-based Deployment from DB to Amazon S3

The Anchore Enterprise Helm chart can run the migration steps listed on this page automatically by spinning up a job that crafts the required configs and runs the necessary migration commands. Further information is available in the Helm chart's documentation. Below are example configurations:

# example config
osaaMigrationJob:
  enabled: true # note that we are enabling the migration job
  analysisArchiveMigration:
    run: true # we are specifying to run the analysis_archive migration
    bucket: "analysis_archive"
    mode: to_analysis_archive
    # the deployment will be migrated to use the following configs for catalog.analysis_archive
    analysis_archive:
      enabled: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config:
          iamauto: true
          region: <MY_AWS_REGION>
          bucket: analysisarchive
  objectStoreMigration:
    run: true
    # note that since this is the same as anchoreConfig.catalog.object_store, the migration
    # command for migrating the object store will still run, but it will not do anything as there
    # is nothing to be done
    object_store:
      verify_content_digests: true
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}

# the deployment was previously deployed using the following configs
anchoreConfig:
  default_admin_password: foobar
  catalog:
    analysis_archive:
      enabled: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
    object_store:
      verify_content_digests: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
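With a values file like the one above, the migration job is typically triggered by upgrading the existing release. A sketch, assuming the chart is installed from the public Anchore chart repository and that the release name, namespace, and values file name below are placeholders for your own:

# Apply the osaaMigrationJob values to the existing release
helm upgrade anchore anchore/enterprise -n anchore -f values.yaml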

EXAMPLE: Migration of Object Store in Helm-based Deployment from DB to S3-compatible

# example config
osaaMigrationJob:
  enabled: true # note that we are enabling the migration job
  analysisArchiveMigration:
    run: true # we are specifying to run the analysis_archive migration
    bucket: "analysis_archive"
    mode: to_analysis_archive
    # the deployment will be migrated to use the following configs for catalog.analysis_archive
    analysis_archive:
      enabled: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config:
          access_key: MY_ACCESS_KEY
          secret_key: MY_SECRET_KEY
          url: 'https://my-s3-compatible-endpoint.example.com:optional_port'
          region: null
          bucket: analysisarchive
  objectStoreMigration:
    run: true
    # note that since this is the same as anchoreConfig.catalog.object_store, the migration
    # command for migrating the object store will still run, but it will not do anything as there
    # is nothing to be done
    object_store:
      verify_content_digests: true
      compression:
        enabled: false
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}

# the deployment was previously deployed using the following configs
anchoreConfig:
  default_admin_password: foobar
  catalog:
    analysis_archive:
      enabled: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
    object_store:
      verify_content_digests: true
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: db
        config: {}
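Before triggering the migration, it can be worth verifying that the destination bucket is reachable with the credentials above. A minimal sketch using the general-purpose AWS CLI (not part of the Anchore tooling) against the S3-compatible endpoint from the example:

# List the destination bucket through the S3-compatible endpoint
AWS_ACCESS_KEY_ID=MY_ACCESS_KEY \
AWS_SECRET_ACCESS_KEY=MY_SECRET_KEY \
aws --endpoint-url 'https://my-s3-compatible-endpoint.example.com:optional_port' s3 ls s3://analysisarchive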

EXAMPLE: Migration of Object Store in Docker Compose from DB to S3-compatible

The following example demonstrates migration for a Docker Compose deployment.

Preparing for Migration

For the migration process you will need:

  1. The original config.yaml already used by the services. If services are split out or use different config.yaml files for different services, you need the config.yaml used by the catalog services.
  2. An updated config.yaml (named dest-config.yaml in this example), with the archive driver section of the catalog service config set to the configuration you want to migrate to.
  3. The db connection string from config.yaml; this is needed directly by the anchore-manager script (see the example below).
  4. Credentials and resources (bucket, etc.) for the destination of the migration.
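The db connection string follows the format visible in the anchore-manager log output later in this example. A sketch with placeholder user, password, and hostname (take the real values from your config.yaml):

# Placeholder credentials and hostname; substitute your own
db="postgresql+pg8000://<user>:<password>@<db-hostname>:5432/postgres"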

If Anchore Enterprise is deployed using Docker Compose, the migration must be manually initiated using the anchore-manager script. The following is an example migration for Anchore Enterprise deployed via Docker Compose on a single host with a local PostgreSQL container. This process requires that you run the command in a location that has access to both the source archive driver configuration and the new archive driver configuration.

Step 1: Shutdown all services

All services should be stopped, but the PostgreSQL db must still be available and running. You can use the docker compose stop command and supply all service names except the DB:

docker compose stop anchore-analyzer anchore-api anchore-catalog anchore-policy-engine anchore-queue anchore-enterprise-api-gateway anchore-enterprise-rbac-service redis
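You can then confirm that only the database container is left running:

docker compose ps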

Step 2: Prepare a new config.yaml

Both the original and new configurations are needed, so create a copy and update the archive driver section to the configuration you want to migrate to:

cd config
cp config.yaml dest-config.yaml
<edit dest-config.yaml>
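As a sketch, the archive driver section of dest-config.yaml might look like the following for the MinIO-style destination used in the rest of this example (the exact key name and nesting under the catalog service config vary by version, so match the layout of your existing config.yaml):

services:
  catalog:
    object_store:
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config:
          access_key: 9EB92C7W61YPFQ6QLDOU
          secret_key: TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s
          url: http://minio-ephemeral-test:9000/
          create_bucket: true
          bucket: anchore-engine-testing
          prefix: internaltest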

Step 3: Test the destination config

Assuming that config is dest-config.yaml:

$ docker compose run anchore-catalog /bin/bash
[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} check /config/dest-config.yaml 
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Using config file /config/dest-config.yaml
[MainThread] [anchore_engine.subsys.object_store.operations/initialize()] [INFO] Archive initialization complete
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Checking existence of test document with user_id = test, bucket = anchorecliconfigtest and archive_id = cliconfigtest
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Creating test document with user_id = test, bucket = anchorecliconfigtest and archive_id = cliconfigtest
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Checking document fetch
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Removing test object
[MainThread] [anchore_manager.cli.objectstorage/check()] [INFO] Archive config check completed successfully

Step 3a: Test the current config.yaml

If you are running the migration from a location other than one of the Anchore Enterprise containers, perform the same check as above but use /config/config.yaml as the input to check (skipped in this instance, since we're running the migration from the same container).
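For reference, that check is the same check subcommand used in Step 3, pointed at the current configuration:

anchore-manager objectstorage --db-connect ${db} check /config/config.yaml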

Step 4: Run the migration

By default, the migration process removes data from the source once it has confirmed the data has been copied to the destination and the metadata has been updated in the anchore db. To skip the deletion on the source, use the --nodelete option: it is the safest option, but if you use it, you are responsible for removing the source data later.
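For example, to run the same migration as below while leaving all source data in place, add the flag to the same command:

anchore-manager objectstorage --db-connect ${db} migrate /config/config.yaml /config/dest-config.yaml --nodelete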

[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} migrate /config/config.yaml /config/dest-config.yaml 
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Loading configs
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration from config: {
  "storage_driver": {
    "config": {}, 
    "name": "db"
  }, 
  "compression": {
    "enabled": false, 
    "min_size_kbytes": 100
  }
}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] Migration to config: {
  "storage_driver": {
    "config": {
      "access_key": "9EB92C7W61YPFQ6QLDOU", 
      "create_bucket": true, 
      "url": "http://minio-ephemeral-test:9000/", 
      "region": false, 
      "bucket": "anchore-engine-testing", 
      "prefix": "internaltest", 
      "secret_key": "TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s"
    }, 
    "name": "s3"
  }, 
  "compression": {
    "enabled": true, 
    "min_size_kbytes": 100
  }
}
Performing this operation requires *all* anchore-engine services to be stopped - proceed? (y/N)y
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Initializing migration from {'storage_driver': {'config': {}, 'name': 'db'}, 'compression': {'enabled': False, 'min_size_kbytes': 100}} to {'storage_driver': {'config': {'access_key': '9EB92C7W61YPFQ6QLDOU', 'create_bucket': True, 'url': 'http://minio-ephemeral-test:9000/', 'region': False, 'bucket': 'anchore-engine-testing', 'prefix': 'internaltest', 'secret_key': 'TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s'}, 'name': 's3'}, 'compression': {'enabled': True, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/migration_context()] [INFO] Initializing source object_store: {'storage_driver': {'config': {}, 'name': 'db'}, 'compression': {'enabled': False, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/migration_context()] [INFO] Initializing dest object_store: {'storage_driver': {'config': {'access_key': '9EB92C7W61YPFQ6QLDOU', 'create_bucket': True, 'url': 'http://minio-ephemeral-test:9000/', 'region': False, 'bucket': 'anchore-engine-testing', 'prefix': 'internaltest', 'secret_key': 'TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s'}, 'name': 's3'}, 'compression': {'enabled': True, 'min_size_kbytes': 100}}
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration Task Id: 1
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Entering main migration loop
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migrating 7 documents
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/policy_bundles/2c53a13c-1765-11e8-82ef-23527761d060
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/manifest_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/analysis_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/image_content_data/sha256:0873c923e00e0fd2ba78041bfb64a105e1ecb7678916d1f7776311e45bf5634b
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/manifest_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/analysis_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Deleting document on source after successful migration to destination. Src = db://admin/image_content_data/sha256:a0cd2c88c5cc65499e959ac33c8ebab45f24e6348b48d8c34fd2308fcb0cc138
[MainThread] [anchore_engine.subsys.object_store.migration/initiate_migration()] [INFO] Migration result summary: {"last_state": "running", "executor_id": "3209ad44d7bb:37:139731996518208:", "archive_documents_migrated": 7, "last_updated": "2018-08-15T18:03:52.951364", "online_migration": null, "created_at": "2018-08-15T18:03:52.951354", "migrate_from_driver": "db", "archive_documents_to_migrate": 7, "state": "complete", "migrate_to_driver": "s3", "ended_at": "2018-08-15T18:03:53.720554", "started_at": "2018-08-15T18:03:52.949956", "type": "archivemigrationtask", "id": 1}
[MainThread] [anchore_manager.cli.objectstorage/migrate()] [INFO] After this migration, your anchore-engine config.yaml MUST have the following configuration options added before starting up again:
compression:
  enabled: true
  min_size_kbytes: 100
storage_driver:
  config:
    access_key: 9EB92C7W61YPFQ6QLDOU
    bucket: anchore-engine-testing
    create_bucket: true
    prefix: internaltest
    region: false
    secret_key: TuHo2UbBx+amD3YiCeidy+R3q82MPTPiyd+dlW+s
    url: http://minio-ephemeral-test:9000/
  name: s3

Step 5: Get coffee!

Migration time will depend on the amount of data involved and the performance of the source and destination systems.

Step 6: View migration results summary

[root@3209ad44d7bb ~]# anchore-manager objectstorage --db-connect ${db} list-migrations
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB params: {"db_pool_size": 30, "db_connect": "postgresql+pg8000://postgres:postgres.dev@postgres-dev:5432/postgres", "db_connect_args": {"ssl": false, "connect_timeout": 120, "timeout": 30}, "db_pool_max_overflow": 100}
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connection configured: True
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB attempting to connect...
[MainThread] [anchore_manager.cli.utils/connect_database()] [INFO] DB connected: True
id         state                  start time                         end time                 from        to        migrated count        total to migrate               last updated               
1         complete        2018-08-15T18:03:52.949956        2018-08-15T18:03:53.720554         db         s3              7                      7                2018-08-15T18:03:53.724628   

This lists all migrations for the service and the number of objects migrated. If you’ve run multiple migrations, you’ll see multiple rows in this output.

Step 7: Replace old config.yaml with updated dest-config.yaml

You should now permanently move the new configuration into place, replacing the old one.

[root@3209ad44d7bb ~]# cp /config/config.yaml /config/config.old.yaml
[root@3209ad44d7bb ~]# cp /config/dest-config.yaml /config/config.yaml

Step 8: Restart Anchore Enterprise services

Run the following command at the same location as your docker-compose file to bring all services back up:

docker compose start
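Once the containers are up, you can tail the catalog logs to confirm it initialized cleanly against the new storage driver (service name taken from the stop command in Step 1):

docker compose logs --tail=50 anchore-catalog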

The system should now be up and running using the new configuration! You can verify with the anchorectl command by fetching a policy, which will have been migrated:

$ anchorectl policy list
 ✔ Fetched policies
┌─────────────────────────┬──────────────────────────────────────┬────────┬──────────────────────┐
│ NAME                    │ POLICY ID                            │ ACTIVE │ UPDATED              │
├─────────────────────────┼──────────────────────────────────────┼────────┼──────────────────────┤
│ Default bundle          │ 2c53a13c-1765-11e8-82ef-23527761d060 │ true   │ 2022-07-14T22:52:27Z │
│ anchore_security_only   │ anchore_security_only                │ false  │ 2022-07-14T22:52:27Z │
│ anchore_cis_1.13.0_base │ anchore_cis_1.13.0_base              │ false  │ 2022-07-14T22:52:27Z │
└─────────────────────────┴──────────────────────────────────────┴────────┴──────────────────────┘

$ anchorectl -o json-raw policy get 2c53a13c-1765-11e8-82ef-23527761d060 
[ 
  {
    "blacklisted_images": [], 
    "comment": "Default bundle", 
    "id": "2c53a13c-1765-11e8-82ef-23527761d060", 
... <lots of json>

If that returns the content properly, then you’re all done!
