AWS CodeBuild

Image scanning can be integrated into your AWS CodeBuild pipeline using anchorectl. This guide provides an end-to-end example that creates all required AWS resources (ECR, CodeCommit, S3, IAM roles, CodeBuild, and CodePipeline) from scratch. If you already have an existing CodeBuild project and CodePipeline, the key integration points are the install phase (to install anchorectl), the post_build commands, and the artifacts section of the buildspec.yml in Step 5.

Requirements

  1. Anchore Enterprise is deployed in your environment, with the API accessible from your AWS CodeBuild environment.
  2. An AWS account with permissions to create ECR repositories, CodeCommit repositories, CodeBuild projects, CodePipeline pipelines, S3 buckets, and IAM roles.
  3. The AWS CLI installed and configured with valid credentials.

1. Configure Variables

Set the following shell variables for use throughout the guide. Replace the placeholder values with your actual Anchore Enterprise deployment URL, username, and password. The ANCHORECTL_PASSWORD value should be treated as a secret to prevent exposure in logs.

export AWS_REGION=us-east-1
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

export APP_NAME=myapp
export ECR_REPO_NAME=$APP_NAME
export CODECOMMIT_REPO_NAME=$APP_NAME-repo
export CODEBUILD_PROJECT_NAME=$APP_NAME-build
export CODEPIPELINE_NAME=$APP_NAME-pipeline

export ARTIFACT_BUCKET=${APP_NAME}-codepipeline-artifacts-${AWS_ACCOUNT_ID}-${AWS_REGION}

export CODEBUILD_ROLE_NAME=${APP_NAME}-codebuild-role
export CODEPIPELINE_ROLE_NAME=${APP_NAME}-codepipeline-role

### Anchore Enterprise connection details
export ANCHORECTL_URL=http://anchore-enterprise-api.example.com:8228
export ANCHORECTL_USERNAME=admin
export ANCHORECTL_PASSWORD=foobar
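Since every later step depends on these variables, it can help to fail fast if one is empty before creating any AWS resources. The `require_vars` helper below is a sketch (not part of the guide's tooling) that uses bash indirect expansion to check each name passed to it:

```shell
#!/usr/bin/env bash
# Sketch: abort early if any required shell variable is empty or unset.
require_vars() {
  local v missing=0
  for v in "$@"; do
    # ${!v-} is bash indirect expansion with an empty default,
    # so unset names are treated the same as empty ones.
    if [ -z "${!v-}" ]; then
      echo "ERROR: required variable $v is not set" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example:
# require_vars AWS_REGION AWS_ACCOUNT_ID APP_NAME \
#              ANCHORECTL_URL ANCHORECTL_USERNAME ANCHORECTL_PASSWORD
```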

2. Create AWS Resources

a) ECR Repository

Create an ECR repository to store your container images. Image tag immutability is enabled to ensure each build produces a unique, traceable image tag derived from the git commit hash.

aws ecr create-repository \
  --region "$AWS_REGION" \
  --repository-name "$ECR_REPO_NAME" \
  --image-tag-mutability IMMUTABLE \
  --image-scanning-configuration scanOnPush=true

b) CodeCommit Repository

Create a CodeCommit repository to host your application source code.

aws codecommit create-repository \
  --region "$AWS_REGION" \
  --repository-name "$CODECOMMIT_REPO_NAME" \
  --repository-description "Source repo for $APP_NAME"

c) S3 Artifact Bucket

Create an S3 bucket for CodePipeline to store build artifacts (including Anchore scan results). Versioning, encryption, and public access blocking are enabled for security best practices.

### us-east-1 does not support LocationConstraint
if [ "$AWS_REGION" = "us-east-1" ]; then
  aws s3api create-bucket \
    --bucket "$ARTIFACT_BUCKET" \
    --region "$AWS_REGION"
else
  aws s3api create-bucket \
    --bucket "$ARTIFACT_BUCKET" \
    --region "$AWS_REGION" \
    --create-bucket-configuration LocationConstraint="$AWS_REGION"
fi

### Enable versioning
aws s3api put-bucket-versioning \
  --bucket "$ARTIFACT_BUCKET" \
  --versioning-configuration Status=Enabled

### Enable server-side encryption
aws s3api put-bucket-encryption \
  --bucket "$ARTIFACT_BUCKET" \
  --server-side-encryption-configuration '{
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms"
        },
        "BucketKeyEnabled": true
      }
    ]
  }'

### Block all public access
aws s3api put-public-access-block \
  --bucket "$ARTIFACT_BUCKET" \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

3. Create IAM Roles

a) CodeBuild Role

The CodeBuild role needs permissions for CloudWatch Logs, S3 artifact access, CodeCommit source pulls, and ECR push/pull operations.

cat > codebuild-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codebuild.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

cat > codebuild-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Logs",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ArtifactsBucket",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::$ARTIFACT_BUCKET",
        "arn:aws:s3:::$ARTIFACT_BUCKET/*"
      ]
    },
    {
      "Sid": "CodeCommitSource",
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull"
      ],
      "Resource": "arn:aws:codecommit:$AWS_REGION:$AWS_ACCOUNT_ID:$CODECOMMIT_REPO_NAME"
    },
    {
      "Sid": "ECRAuth",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ECRPushPull",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart",
        "ecr:BatchGetImage"
      ],
      "Resource": "arn:aws:ecr:$AWS_REGION:$AWS_ACCOUNT_ID:repository/$ECR_REPO_NAME"
    }
  ]
}
EOF

aws iam create-role \
  --role-name "$CODEBUILD_ROLE_NAME" \
  --assume-role-policy-document file://codebuild-trust-policy.json

aws iam put-role-policy \
  --role-name "$CODEBUILD_ROLE_NAME" \
  --policy-name "${APP_NAME}-codebuild-inline" \
  --policy-document file://codebuild-policy.json

export CODEBUILD_ROLE_ARN=$(aws iam get-role \
  --role-name "$CODEBUILD_ROLE_NAME" \
  --query 'Role.Arn' \
  --output text)

b) CodePipeline Role

The CodePipeline role needs permissions for S3 artifact access, CodeCommit source operations, and CodeBuild build triggers.

cat > codepipeline-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codepipeline.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

cat > codepipeline-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3Artifacts",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketVersioning",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::$ARTIFACT_BUCKET",
        "arn:aws:s3:::$ARTIFACT_BUCKET/*"
      ]
    },
    {
      "Sid": "CodeCommitSource",
      "Effect": "Allow",
      "Action": [
        "codecommit:GetBranch",
        "codecommit:GetCommit",
        "codecommit:UploadArchive",
        "codecommit:GetUploadArchiveStatus",
        "codecommit:CancelUploadArchive"
      ],
      "Resource": "arn:aws:codecommit:$AWS_REGION:$AWS_ACCOUNT_ID:$CODECOMMIT_REPO_NAME"
    },
    {
      "Sid": "CodeBuildStart",
      "Effect": "Allow",
      "Action": [
        "codebuild:BatchGetBuilds",
        "codebuild:StartBuild"
      ],
      "Resource": "arn:aws:codebuild:$AWS_REGION:$AWS_ACCOUNT_ID:project/$CODEBUILD_PROJECT_NAME"
    }
  ]
}
EOF

aws iam create-role \
  --role-name "$CODEPIPELINE_ROLE_NAME" \
  --assume-role-policy-document file://codepipeline-trust-policy.json

aws iam put-role-policy \
  --role-name "$CODEPIPELINE_ROLE_NAME" \
  --policy-name "${APP_NAME}-codepipeline-inline" \
  --policy-document file://codepipeline-policy.json

export CODEPIPELINE_ROLE_ARN=$(aws iam get-role \
  --role-name "$CODEPIPELINE_ROLE_NAME" \
  --query 'Role.Arn' \
  --output text)

4. Create the CodeBuild Project

The CodeBuild project defines the build environment and passes Anchore Enterprise credentials as environment variables. The privilegedMode setting is required for Docker-in-Docker builds.

Note: The ANCHORECTL_PASSWORD is included as a PLAINTEXT environment variable here for simplicity. For production use, store it in AWS Secrets Manager or SSM Parameter Store and reference it with type: SECRETS_MANAGER or type: PARAMETER_STORE in the environmentVariables block.
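For example, the password entry in environmentVariables could instead reference a Secrets Manager secret by name (the secret name anchore/anchorectl-password below is a hypothetical placeholder; the CodeBuild service role would also need secretsmanager:GetSecretValue on that secret):

```json
{
  "name": "ANCHORECTL_PASSWORD",
  "value": "anchore/anchorectl-password",
  "type": "SECRETS_MANAGER"
}
```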

cat > create-project.json <<EOF
{
  "name": "$CODEBUILD_PROJECT_NAME",
  "serviceRole": "$CODEBUILD_ROLE_ARN",
  "source": {
    "type": "CODEPIPELINE",
    "buildspec": "buildspec.yml"
  },
  "artifacts": {
    "type": "CODEPIPELINE"
  },
  "environment": {
    "type": "LINUX_CONTAINER",
    "image": "aws/codebuild/standard:7.0",
    "computeType": "BUILD_GENERAL1_MEDIUM",
    "privilegedMode": true,
    "environmentVariables": [
      {
        "name": "AWS_REGION",
        "value": "$AWS_REGION",
        "type": "PLAINTEXT"
      },
      {
        "name": "IMAGE_REPO_NAME",
        "value": "$ECR_REPO_NAME",
        "type": "PLAINTEXT"
      },
      {
        "name": "ANCHORECTL_URL",
        "value": "$ANCHORECTL_URL",
        "type": "PLAINTEXT"
      },
      {
        "name": "ANCHORECTL_USERNAME",
        "value": "$ANCHORECTL_USERNAME",
        "type": "PLAINTEXT"
      },
      {
        "name": "ANCHORECTL_PASSWORD",
        "value": "$ANCHORECTL_PASSWORD",
        "type": "PLAINTEXT"
      }
    ]
  },
  "timeoutInMinutes": 60
}
EOF

aws codebuild create-project \
  --region "$AWS_REGION" \
  --cli-input-json file://create-project.json

5. Configure Scanning Mode

a) Distributed Mode

This is the most easily scalable method for scanning images. Distributed scanning uses the anchorectl utility to build the SBOM directly on the CodeBuild runner and then pushes the SBOM to Anchore Enterprise through the API. This avoids the need to provide registry credentials in the Enterprise backend, since the image is loaded directly from the local Docker daemon.

Clone the CodeCommit repository and create a buildspec.yml with the following content. The buildspec installs anchorectl directly from your Anchore Enterprise deployment (ensuring version compatibility), builds and tags the Docker image using the git commit hash, scans the image with Anchore Enterprise, and exports all scan artifacts.

git clone "$(aws codecommit get-repository \
  --region "$AWS_REGION" \
  --repository-name "$CODECOMMIT_REPO_NAME" \
  --query 'repositoryMetadata.cloneUrlHttp' \
  --output text)"

cd "$CODECOMMIT_REPO_NAME"
git checkout -b main

cat > buildspec.yml <<'EOF'
version: 0.2

phases:
  install:
    commands:
      ### install anchorectl from your Anchore Enterprise deployment to ensure version compatibility
      - curl -sSfL -u "${ANCHORECTL_USERNAME}:${ANCHORECTL_PASSWORD}" "${ANCHORECTL_URL}/v2/system/anchorectl?operating_system=linux&architecture=amd64" | tar -zx -C /usr/local/bin anchorectl
  pre_build:
    commands:
      - AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
      ### use the short git commit hash as the image tag for traceability
      - IMAGE_TAG=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c1-7)
      - IMAGE_URI=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${IMAGE_REPO_NAME}:${IMAGE_TAG}
      - aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
  build:
    commands:
      - |
        docker build -t ${IMAGE_REPO_NAME}:${IMAGE_TAG} \
          --build-arg CI=aws-codepipeline \
          --build-arg REPO=${IMAGE_REPO_NAME} \
          --build-arg COMMIT=${CODEBUILD_RESOLVED_SOURCE_VERSION} \
          --build-arg COMMIT_SHORT=${IMAGE_TAG} \
          --build-arg PIPELINE=${CODEBUILD_INITIATOR} \
          --build-arg REGION=${AWS_REGION} \
          .
      - docker tag ${IMAGE_REPO_NAME}:${IMAGE_TAG} ${IMAGE_URI}
  post_build:
    commands:
      ### scan the image from the local Docker daemon (distributed mode) and wait for analysis to complete
      ### --get all=./ exports all scan artifacts (SBOMs, vulnerabilities, policy evaluation) to the working directory
      - |
        anchorectl image add ${IMAGE_URI} --from docker --dockerfile Dockerfile --no-auto-subscribe --wait --get all=./ \
          --annotation "ci=aws-codepipeline" \
          --annotation "repo=${IMAGE_REPO_NAME}" \
          --annotation "commit=${CODEBUILD_RESOLVED_SOURCE_VERSION}" \
          --annotation "commit_short=${IMAGE_TAG}" \
          --annotation "pipeline=${CODEBUILD_INITIATOR}" \
          --annotation "region=${AWS_REGION}"
      ### evaluate the image against your Anchore Enterprise policy
      ### set --fail-based-on-results to break the build if the policy evaluation returns FAIL
      - anchorectl image check ${IMAGE_URI} --fail-based-on-results --detail
      - docker push ${IMAGE_URI}
      - printf '[{"name":"app","imageUri":"%s"}]' "${IMAGE_URI}" > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
    - content.json
    - image-metadata.json
    - policy-evaluation.json
    - sbom.json
    - sbomcyclonedx.json
    - sbomspdx.json
    - vulnerability.json
EOF

cat > Dockerfile <<'EOF'
FROM public.ecr.aws/ubuntu/ubuntu:latest

ARG CI
ARG REPO
ARG COMMIT
ARG COMMIT_SHORT
ARG PIPELINE
ARG REGION

LABEL org.opencontainers.image.source="${REPO}" \
      org.opencontainers.image.revision="${COMMIT}" \
      com.anchore.ci="${CI}" \
      com.anchore.commit.short="${COMMIT_SHORT}" \
      com.anchore.pipeline="${PIPELINE}" \
      com.anchore.region="${REGION}"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       python3 \
       python3-pip \
    && rm -rf /var/lib/apt/lists/*

CMD ["python3", "--version"]
EOF

git add .
git commit -m "Initial commit"
git push origin main

cd ..
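The --fail-based-on-results flag makes anchorectl image check exit nonzero when the policy evaluation returns FAIL, which is what breaks the build. If you want a custom log line before the phase aborts, a small wrapper like the sketch below (gate is a hypothetical helper, not an anchorectl feature) logs the outcome while preserving the exit code:

```shell
#!/usr/bin/env bash
# Sketch: run a gating command, log pass/fail, and propagate its exit
# code so the CodeBuild phase still fails when the gate fails.
gate() {
  if "$@"; then
    echo "GATE PASS: $*"
  else
    local rc=$?
    echo "GATE FAIL (exit $rc): $*" >&2
    return "$rc"
  fi
}

# In the buildspec post_build phase this would wrap the policy check:
# gate anchorectl image check "${IMAGE_URI}" --fail-based-on-results --detail
```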

The --get all=./ flag on anchorectl image add exports the following scan artifacts to the build directory, which are then stored as pipeline artifacts in S3:

Artifact                 Description
sbom.json                Anchore-native SBOM format
sbomcyclonedx.json       CycloneDX SBOM (industry standard)
sbomspdx.json            SPDX SBOM (industry standard)
vulnerability.json       Full vulnerability report
policy-evaluation.json   Policy evaluation results
content.json             Package and file content listing
image-metadata.json      Image metadata (digest, distro, layers)
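These exported JSON files can be post-processed in later pipeline stages. As one hedged example, the sketch below tallies findings by severity with jq; the exact layout of vulnerability.json can differ between anchorectl versions, so treat the jq filter as an assumption to verify against your own output:

```shell
#!/usr/bin/env bash
# Sketch: tally vulnerabilities by severity from an exported report.
# ASSUMPTION: a top-level "vulnerabilities" array whose entries carry a
# "severity" field; adjust the jq filter if your file differs.
summarize_vulns() {
  jq -r '[.vulnerabilities[].severity]
         | group_by(.)
         | map("\(.[0]): \(length)")
         | .[]' "$1"
}

# Example: summarize_vulns vulnerability.json
```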

b) Centralized Mode

This method uses the “analyzer” pods in the Anchore Enterprise deployment to build the SBOM. It can create queuing if there are not enough analyzer processes, and it requires the operator to provide ECR registry credentials in the Enterprise backend. Centralized mode may be preferred when the Anchore Enterprise operator does not control the image build process (the analyzers can simply poll registries for new images as they are pushed), or when the operator wants to queue images for asynchronous scanning because vulnerability and policy results are not needed immediately. Centralized scanning is also required if you want malware scanning results from Anchore Enterprise’s ClamAV integration.

To use centralized mode, replace the post_build commands in the buildspec above with the following. Note that --from docker (and the accompanying --dockerfile flag) is removed, so Anchore Enterprise will pull the image from the registry after it is pushed.

  post_build:
    commands:
      ### push the image first so Anchore Enterprise can pull it from the registry
      - docker push ${IMAGE_URI}
      ### queue the image for scanning by Anchore Enterprise analyzers
      ### --no-auto-subscribe prevents automatic re-scanning on future tag updates
      - |
        anchorectl image add ${IMAGE_URI} --no-auto-subscribe --wait --get all=./ \
          --annotation "ci=aws-codepipeline" \
          --annotation "repo=${IMAGE_REPO_NAME}" \
          --annotation "commit=${CODEBUILD_RESOLVED_SOURCE_VERSION}" \
          --annotation "commit_short=${IMAGE_TAG}" \
          --annotation "pipeline=${CODEBUILD_INITIATOR}" \
          --annotation "region=${AWS_REGION}"
      ### evaluate the image against your Anchore Enterprise policy
      - anchorectl image check ${IMAGE_URI} --fail-based-on-results --detail
      - printf '[{"name":"app","imageUri":"%s"}]' "${IMAGE_URI}" > imagedefinitions.json

6. Create the CodePipeline

The pipeline has two stages: a Source stage that pulls from CodeCommit on each commit to the main branch, and a Build stage that runs the CodeBuild project.

cat > pipeline.json <<EOF
{
  "pipeline": {
    "name": "$CODEPIPELINE_NAME",
    "roleArn": "$CODEPIPELINE_ROLE_ARN",
    "artifactStore": {
      "type": "S3",
      "location": "$ARTIFACT_BUCKET"
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "provider": "CodeCommit",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "RepositoryName": "$CODECOMMIT_REPO_NAME",
              "BranchName": "main",
              "PollForSourceChanges": "false"
            },
            "outputArtifacts": [
              {
                "name": "SourceOutput"
              }
            ],
            "inputArtifacts": []
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            "name": "Build",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "ProjectName": "$CODEBUILD_PROJECT_NAME"
            },
            "inputArtifacts": [
              {
                "name": "SourceOutput"
              }
            ],
            "outputArtifacts": [
              {
                "name": "BuildOutput"
              }
            ]
          }
        ]
      }
    ],
    "version": 1
  }
}
EOF

aws codepipeline create-pipeline \
  --region "$AWS_REGION" \
  --cli-input-json file://pipeline.json

7. Run the Pipeline

Start the pipeline manually:

aws codepipeline start-pipeline-execution \
  --region "$AWS_REGION" \
  --name "$CODEPIPELINE_NAME"

8. View Results

When the pipeline completes, view the build results in the AWS Console under CodeBuild > Build history > select your build > Build logs. The logs will display the anchorectl output including vulnerability counts and policy evaluation results.

The scan artifacts (SBOMs, vulnerability report, policy evaluation) are stored as build artifacts in the S3 artifact bucket. You can download them from CodePipeline > select your pipeline > BuildOutput artifact, or directly from the S3 bucket.