Data Stream

Overview

The Anchore Data Stream provides a mechanism to stream security data from Anchore Enterprise to external systems for further processing, analysis, and long-term storage. As image vulnerability scans and policy evaluations occur within Anchore Enterprise, the data is captured and written to files. A sidecar service (such as Fluent Bit) monitors these files, reads the data, and forwards the events to external destinations like Splunk, Elasticsearch, or other SIEM platforms.

This feature enables:

  • Real-time Security Monitoring: Stream vulnerability discoveries and policy violations as they occur
  • Centralized Log Management: Aggregate Anchore security data with other infrastructure logs
  • Custom Dashboards: Build security dashboards in your preferred analytics platform
  • Compliance Reporting: Maintain audit trails of security events for compliance requirements
  • Alerting Integration: Trigger alerts based on critical vulnerability discoveries or policy failures

Architecture

The data streaming pipeline consists of three components:

Anchore Enterprise (Reports Worker) → Data Event Files → Fluent Bit Sidecar → External Destination
  1. Reports Worker: Writes security data to NDJSON (newline-delimited JSON) files
  2. Data Event Files: Rotating log files stored on a shared volume, with automatic cleanup of processed files
  3. Fluent Bit: A lightweight log forwarder that tails the data event files and forwards them to your destination

Data Event Types

The following system data events are streamed:

  • Image Vulnerability Scan Results: Changes to the vulnerability scan results, including CVE IDs, severity, fix availability, and affected packages
  • Image Policy Evaluation Findings: Changes to the policy evaluation results, including pass/fail status, triggered gates, and findings
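
Each event is written as a single NDJSON line. An illustrative, abbreviated vulnerability event (see the Event Schema section of the Splunk Integration guide for the full structure):

{"event": "image.vulnerability_report", "timestamp": "2024-01-15T10:30:45.123Z", "account_name": "admin", "resource_id": "sha256:abc123...", "payload": {"total_added": 15, "added": [...]}}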

Getting Started

To set up the Data Event Stream integration:

  1. Configure the Data Stream in Anchore Enterprise
  2. Deploy Fluent Bit as a sidecar container
  3. Configure your destination (e.g., Splunk)

1 - Event Stream Configuration

Overview

The Data Event Stream feature is configured through the Anchore Enterprise configuration file or Helm values. This page covers the configuration options for enabling event streaming and customizing its behavior.

Prerequisites

Before enabling the event stream:

  1. Ensure you have a valid license with the Data Stream entitlement
  2. Plan your shared volume strategy for the data event files. Maximum file size and count will impact storage requirements.
  3. Determine your destination system (Splunk, Elasticsearch, etc.)

Configuration

Ports

  • Fluent Bit Health (port 2020): Health checks and metrics endpoint
  • Splunk HEC (port 8088, example): HTTP Event Collector ingestion
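
To confirm the Fluent Bit HTTP server is responding (from inside the pod, or via kubectl port-forward), you can query its built-in endpoints; these are available when HTTP_Server is On, as in the configurations below:

# Query Fluent Bit's built-in HTTP server
curl -s http://localhost:2020/api/v1/uptime
curl -s http://localhost:2020/api/v1/metrics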

Shared Data Files

  • /var/log/anchore/events/: Default directory for data event files
  • /var/log/anchore/events/events.json.*: Rotating data event files (timestamped)
  • /var/log/anchore/events/offsets.db: Fluent Bit file position tracking database

Helm Values (Kubernetes)

To enable event streaming in a Kubernetes deployment using Helm, add the following to your values.yaml:

anchoreConfig:
  reports_worker:
    event_stream:
      enabled: true
      max_file_size_mb: 100
      max_file_count: 5
    cycle_timers:
      event_stream_health: 60

Config File (Docker Compose / Standalone)

For Docker Compose or standalone deployments, add the following to your config.yaml:

services:
  reports_worker:
    event_stream:
      enabled: true
      max_file_size_mb: 100
      max_file_count: 5
    cycle_timers:
      event_stream_health: 60

Volume Configuration

The Reports Worker and the Fluent Bit sidecar must have read/write access to the data event directory.

Kubernetes

Create a shared volume between the Reports Worker and Fluent Bit:

# In your Helm values or deployment manifest
volumes:
  - name: anchore-events
    emptyDir: {}

# Reports Worker container
volumeMounts:
  - name: anchore-events
    mountPath: /var/log/anchore

# Fluent Bit container
volumeMounts:
  - name: anchore-events
    mountPath: /var/log/anchore

Docker Compose

Use a named volume shared between containers:

volumes:
  anchore-events:

services:
  reports-worker:
    volumes:
      - anchore-events:/var/log/anchore

  fluent-bit:
    volumes:
      # Fluent Bit needs write access for its position database (offsets.db)
      - anchore-events:/var/log/anchore

File Rotation

Data event files are rotated based on the max_file_size_mb setting. When a file reaches the maximum size, a new file is created with a timestamp suffix:

events.json.20240115T103045Z
events.json.20240115T114532Z
events.json.20240115T125018Z

The max_file_count setting determines how many files are retained. Older files are deleted after they have been processed by Fluent Bit (tracked via the position database).
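
As a rough sizing guide, worst-case disk usage for event files is approximately max_file_size_mb × max_file_count; with the defaults shown above, that is 100 MB × 5 = 500 MB, plus the Fluent Bit position database.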

Health Monitoring

The data stream health watcher runs at the interval specified by event_stream_health and performs the following tasks:

  1. Cleanup: Removes event files that have been fully processed by Fluent Bit
  2. Emitter Resume Detection: Detects when the data stream has been suspended and allows it to resume processing when possible

Viewing Integration Status

Anchore's system event notifications include events related to data stream health. You can view these events via the API or UI. Filter on the event types system.event_stream.suspend and system.event_stream.resume to see suspension and resumption events.
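
For example, a sketch of querying these events via the API (the exact endpoint path and query parameters depend on your Anchore Enterprise API version; verify against your deployment's API reference):

# Hypothetical query for data stream suspension events
curl -u <user>:<password> \
  "https://<anchore-host>/v2/events?event_type=system.event_stream.suspend"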

Verification

After enabling event streaming, verify it is working:

Step 1: Analyze an Image

Analyze a new image to generate vulnerability and policy events:
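
For example, using AnchoreCTL (the image reference is illustrative, and flags may vary by AnchoreCTL version):

# Submit an image for analysis and wait for it to complete
anchorectl image add docker.io/library/nginx:latest --wait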

Step 2: Check Data Event Files

You should see one or more event files matching the pattern events.json.*. For example:

# Kubernetes
kubectl exec -it <reports-worker-pod> -- ls -la /var/log/anchore/events/
Defaulted container "enterprise-reportsworker" out of: enterprise-reportsworker, fluent-bit (init)
total 51680
drwxrwsrwx. 2 root    anchore      104 Jan 10 18:17 .
drwxrwxr-x. 1 anchore root         113 Jan 10 15:15 ..
-rw-r--r--. 1 anchore anchore 35440931 Jan 10 18:41 events.json.20260110T181643Z
-rw-r--r--. 1 anchore anchore     8192 Jan 10 18:02 offsets.db
-rw-r--r--. 1 anchore anchore    32768 Jan 10 18:41 offsets.db-shm
-rw-r--r--. 1 anchore anchore  4120032 Jan 10 18:41 offsets.db-wal


# Docker
docker exec <reports-worker-container> ls -la /var/log/anchore/events/
total 5092
drwxr-xr-x 2 root    root    4096 Jan 10 18:29 .
drwxrwxr-x 3 anchore root    4096 Jan 10 15:21 ..
-rw-r--r-- 1 root    root 5163541 Jan 10 15:26 events.json.20260110T152612Z
-rw-r--r-- 1 root    root    8192 Jan 10 17:53 offsets.db
-rw-r--r-- 1 root    root   32768 Jan 10 18:29 offsets.db-shm
-rw-r--r-- 1 root    root       0 Jan 10 18:29 offsets.db-wal

Troubleshooting

No Event Files Created

  1. Verify enabled: true is set in the configuration
  2. Check that the Reports Worker has write permissions to the directory
  3. Ensure the event_stream_health cycle timer is configured
  4. Check Reports Worker logs for errors

Events Not Being Processed

  1. Verify Fluent Bit is running and can read the event files
  2. Check the position database (offsets.db) exists and is being updated
  3. Review Fluent Bit logs for connection or parsing errors

Data Stream is Suspended

If the data stream becomes suspended due to unprocessed files accumulating, consider:

  1. Increase max_file_size_mb to buffer more data and allow Fluent Bit to catch up
  2. Increase max_file_count to retain more files during high-volume periods
  3. Ensure Fluent Bit is keeping up with event production

Next Steps

Continue to the Fluent Bit Integration guide below to deploy the log forwarder sidecar.

2 - Fluent Bit Integration

Overview

Fluent Bit is a lightweight, high-performance log processor and forwarder that serves as the bridge between Anchore Enterprise event files and your destination system. This guide covers deploying Fluent Bit as a sidecar container to forward events to external systems.

Prerequisites

  • Data event streaming enabled in Anchore Enterprise
  • Shared volume configured between Reports Worker and Fluent Bit
  • Network access from Fluent Bit to your destination system

Architecture

Fluent Bit runs as a sidecar container alongside the Reports Worker, sharing a volume for event files:

┌──────────────────────────────────────────────────────────────────────┐
│                      Kubernetes Pod                                  │
│  ┌─────────────────┐                   ┌─────────────────────────┐   │
│  │  Reports Worker │                   │      Fluent Bit         │   │
│  │                 │                   │                         │   │
│  │   Data Event    │                   │  Tail Input Plugin      │   │
│  │     Emitter     │                   │         │               │   │
│  └────────┬────────┘                   │         ▼               │   │
│           │                            │  JSON Parser            │   │
│           │ writes                     │         │               │   │
│           ▼                            │         ▼               │   │
│  ┌──────────────────────────┐          │  Output Plugin ─────────┼───┼──► Splunk/Elastic/etc
│  │ /var/log/anchore/events/ │◄─────────┤  (HTTP/HEC)             │   │
│  │                          │  reads   │                         │   │
│  └──────────────────────────┘          └─────────────────────────┘   │
│       Shared Volume                                                  │
└──────────────────────────────────────────────────────────────────────┘

Deployment

Kubernetes (Helm)

Add a Fluent Bit sidecar to your Anchore Enterprise deployment by modifying your Helm values:

    reportsWorker:
      extraVolumes:
        - name: anchore-events
          emptyDir: {}
        - name: fluent-bit-config
          configMap:
            name: fluent-bit-config
            defaultMode: 0644
        - name: fluent-bit-lua-helpers
          configMap:
            name: fluent-bit-lua-helpers
            defaultMode: 0644
      extraVolumeMounts:
        - name: anchore-events
          mountPath: /var/log/anchore/events
      initContainers:
        - name: fluent-bit
          image: fluent/fluent-bit:latest
          imagePullPolicy: IfNotPresent
          restartPolicy: Always
          ports:
            - containerPort: 2020
              name: metrics
              protocol: TCP
          volumeMounts:
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc/fluent-bit.conf
              subPath: fluent-bit.conf
              readOnly: true
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc/parsers.conf
              subPath: parsers.conf
              readOnly: true
            - name: fluent-bit-lua-helpers
              mountPath: /fluent-bit/etc/anchore_helpers.lua
              subPath: anchore_helpers.lua
            - name: anchore-events
              mountPath: /var/log/anchore/events

Create a ConfigMap for Fluent Bit configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush           1
        Daemon          Off
        Log_Level       info
        Parsers_File    parsers.conf
        HTTP_Server     On
        HTTP_Listen     0.0.0.0
        HTTP_Port       2020

    [INPUT]
        Name            tail
        Path            /var/log/anchore/events/events.json.*
        Tag             anchore.events
        Parser          json
        DB              /var/log/anchore/events/offsets.db
        Mem_Buf_Limit   64MB
        Buffer_Chunk_Size 32MB
        Buffer_Max_Size 64MB
        Skip_Long_Lines Off
        Refresh_Interval 10
        Rotate_Wait     5
        Read_from_Head  On

    [FILTER]
        Name            modify
        Match           anchore.events
        Add             anchore_service reports_worker

    [OUTPUT]
        Name            splunk
        Match           anchore.events
        Host            ${SPLUNK_HEC_HOST}
        Port            ${SPLUNK_HEC_PORT}
        TLS             On
        TLS.Verify      On
        Splunk_Token    ${SPLUNK_HEC_TOKEN}
        Splunk_Send_Raw Off
        Event_Host      anchore-enterprise
        Event_Sourcetype anchore:events
        Retry_Limit     5

  parsers.conf: |
    [PARSER]
        Name        json
        Format      json
        Time_Key    timestamp
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ
        Time_Keep   On

Docker Compose

Add Fluent Bit to your Docker Compose configuration:

services:
  fluent-bit:
    image: fluent/fluent-bit:latest
    restart: unless-stopped
    volumes:
      - ./fluent-bit/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf:ro
      - ./fluent-bit/parsers.conf:/fluent-bit/etc/parsers.conf:ro
      - anchore-events:/var/log/anchore:rw
    environment:
      SPLUNK_HEC_HOST: "${SPLUNK_HEC_HOST:-splunk}"
      SPLUNK_HEC_PORT: "${SPLUNK_HEC_PORT:-8088}"
      SPLUNK_HEC_TOKEN: "${SPLUNK_HEC_TOKEN}"
    ports:
      - "2020:2020"
    depends_on:
      - reports-worker
    networks:
      - anchore-network

volumes:
  anchore-events:

Create the configuration files in a fluent-bit/ directory:

fluent-bit/fluent-bit.conf:

[SERVICE]
    Flush           1
    Daemon          Off
    Log_Level       info
    Parsers_File    parsers.conf
    HTTP_Server     On
    HTTP_Listen     0.0.0.0
    HTTP_Port       2020

[INPUT]
    Name            tail
    Path            /var/log/anchore/events/events.json.*
    Tag             anchore.events
    Parser          json
    DB              /var/log/anchore/events/offsets.db
    Mem_Buf_Limit   64MB
    Buffer_Chunk_Size 32MB
    Buffer_Max_Size 64MB
    Skip_Long_Lines Off
    Refresh_Interval 10
    Rotate_Wait     5
    Read_from_Head  On

[FILTER]
    Name            modify
    Match           anchore.events
    Add             anchore_service reports_worker

[OUTPUT]
    Name            splunk
    Match           anchore.events
    Host            ${SPLUNK_HEC_HOST}
    Port            ${SPLUNK_HEC_PORT}
    TLS             On
    TLS.Verify      On
    Splunk_Token    ${SPLUNK_HEC_TOKEN}
    Splunk_Send_Raw Off
    Event_Host      anchore-enterprise
    Event_Sourcetype anchore:events
    Retry_Limit     5

fluent-bit/parsers.conf:

[PARSER]
    Name        json
    Format      json
    Time_Key    timestamp
    Time_Format %Y-%m-%dT%H:%M:%S.%LZ
    Time_Keep   On

Configuration Reference

Input Configuration

The tail input plugin monitors event files and tracks read positions:

  • Name (tail): Use the tail input plugin
  • Path (/var/log/anchore/events/events.json.*): Pattern matching event files
  • Tag (anchore.events): Tag for routing to outputs
  • Parser (json): Parse each line as JSON
  • DB (/var/log/anchore/events/offsets.db): SQLite database for position tracking
  • Mem_Buf_Limit (64MB): Memory buffer limit
  • Buffer_Chunk_Size (32MB): Buffer chunk size for reading
  • Buffer_Max_Size (64MB): Maximum buffer size per file
  • Read_from_Head (On): Read from the beginning of new files
  • Refresh_Interval (10): Seconds between file checks
  • Rotate_Wait (5): Seconds to wait before processing rotated files

Buffer Sizing

Vulnerability reports can be large (10-100+ KB per event). The buffer settings should accommodate your largest expected events:

  • Vulnerability Report (few CVEs): typically 10-50 KB; recommended buffer 32 MB
  • Vulnerability Report (many CVEs): typically 100-500 KB; recommended buffer 64 MB
  • Vulnerability Report (large image): typically 500 KB - 2 MB; recommended buffer 128 MB
  • Policy Evaluation: typically 5-20 KB; recommended buffer 32 MB

For large images with many vulnerabilities, increase the buffer settings:

[INPUT]
    ...
    Mem_Buf_Limit   128MB
    Buffer_Chunk_Size 64MB
    Buffer_Max_Size 128MB

Position Tracking

Fluent Bit uses an SQLite database to track which events have been read and forwarded. This ensures:

  • Events are not re-sent after Fluent Bit restarts
  • Each file is tracked independently by inode
  • Progress is persistent across container restarts

The position database is stored at the path specified by DB and should be on the same volume as the event files.
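
If the sqlite3 CLI is available in the container, you can inspect the tracked positions directly (in_tail_files is the table the tail plugin uses for its position database):

# Show tracked file names, byte offsets, and rotation status
sqlite3 /var/log/anchore/events/offsets.db \
  'SELECT name, offset, rotated FROM in_tail_files;'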

Output Plugins

Fluent Bit supports multiple output destinations. Common options include:

Splunk

See the Splunk Integration guide for detailed configuration.

[OUTPUT]
    Name            splunk
    Match           anchore.events
    Host            ${SPLUNK_HEC_HOST}
    Port            ${SPLUNK_HEC_PORT}
    TLS             On
    TLS.Verify      On
    Splunk_Token    ${SPLUNK_HEC_TOKEN}

Elasticsearch

[OUTPUT]
    Name            es
    Match           anchore.events
    Host            elasticsearch.example.com
    Port            9200
    Index           anchore-events
    Type            _doc
    TLS             On
    TLS.Verify      On
    HTTP_User       ${ES_USER}
    HTTP_Passwd     ${ES_PASSWORD}

HTTP (Generic Webhook)

[OUTPUT]
    Name            http
    Match           anchore.events
    Host            webhook.example.com
    Port            443
    URI             /api/events
    Format          json
    TLS             On
    TLS.Verify      On
    Header          Authorization Bearer ${API_TOKEN}

Stdout (Debugging)

For troubleshooting, add stdout output to see events in container logs:

[OUTPUT]
    Name            stdout
    Match           anchore.events
    Format          json_lines

Filtering and Transformation

Adding Metadata

Add custom fields to all events:

[FILTER]
    Name            modify
    Match           anchore.events
    Add             environment production
    Add             cluster_name my-cluster
    Add             anchore_service reports_worker

Filtering by Event Type

Route different event types to different outputs:

[FILTER]
    Name            rewrite_tag
    Match           anchore.events
    Rule            $event ^(image\.vulnerability_report)$ vuln.$1 false
    Rule            $event ^(tag\.policy_evaluation)$ policy.$1 false

[OUTPUT]
    Name            splunk
    Match           vuln.*
    Host            ${SPLUNK_HEC_HOST}
    Splunk_Token    ${VULN_TOKEN}
    Event_Index     vulnerabilities

[OUTPUT]
    Name            splunk
    Match           policy.*
    Host            ${SPLUNK_HEC_HOST}
    Splunk_Token    ${POLICY_TOKEN}
    Event_Index     policy_evaluations

Troubleshooting

No Events Forwarded

  1. Check event files exist:

    ls -la /var/log/anchore/events/
    
  2. Verify Fluent Bit can read files:

    docker logs <fluent-bit-container> 2>&1 | grep -i "tail"
    
  3. Check position database:

    ls -la /var/log/anchore/events/offsets.db
    
  4. Enable debug logging:

    [SERVICE]
        Log_Level   debug
    

Connection Errors

  1. Verify network connectivity:

    # From inside the Fluent Bit container
    curl -k https://${SPLUNK_HEC_HOST}:${SPLUNK_HEC_PORT}/services/collector/health
    
  2. Check TLS settings: If using self-signed certificates, you may need TLS.Verify Off (not recommended for production)

  3. Verify credentials: Test HEC token directly:

    curl -k -X POST "https://${SPLUNK_HEC_HOST}:${SPLUNK_HEC_PORT}/services/collector/event" \
      -H "Authorization: Splunk ${SPLUNK_HEC_TOKEN}" \
      -d '{"event": "test"}'
    

Buffer Overflow

If you see buffer full errors:

  1. Increase buffer limits:

    Mem_Buf_Limit   128MB
    Buffer_Max_Size 128MB
    
  2. Check destination throughput - events may be produced faster than they can be forwarded

  3. Consider adding backpressure handling with storage.type filesystem, as sketched below
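
A minimal filesystem-buffering sketch (the storage.path directory is an assumption; point it at a writable volume):

[SERVICE]
    storage.path              /var/log/anchore/flb-storage/
    storage.sync              normal
    storage.backlog.mem_limit 16MB

[INPUT]
    Name            tail
    ...
    storage.type    filesystem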

Re-sending All Events

To reset position tracking and re-send all events:

# Stop Fluent Bit
# Delete the position database
rm /var/log/anchore/events/offsets.db
# Restart Fluent Bit

Next Steps

Continue to the Splunk Integration guide below to configure the destination.

3 - Splunk Integration

Overview

This guide covers integrating Anchore Enterprise data streaming with Splunk using the HTTP Event Collector (HEC). Once configured, vulnerability reports and policy evaluations will flow into Splunk for search, alerting, and dashboard visualization.

Prerequisites

  • Data event streaming enabled in Anchore Enterprise
  • Fluent Bit deployed as a sidecar container (see the Fluent Bit Integration guide)
  • A Splunk instance with permissions to create HEC tokens and indexes

Architecture

┌─────────────────────┐       ┌─────────────────────┐       ┌─────────────────────┐
│  Anchore Enterprise │       │     Fluent Bit      │       │       Splunk        │
│                     │       │                     │       │                     │
│  Reports Worker     │──────►│  Tail + JSON Parse  │──────►│  HTTP Event         │
│  Event Files        │ NDJSON│  Splunk Output      │ HTTPS │  Collector (HEC)    │
│                     │       │                     │       │                     │
└─────────────────────┘       └─────────────────────┘       └─────────────────────┘
                                                                     │
                                                                     ▼
                                                            ┌─────────────────────┐
                                                            │  Splunk Index       │
                                                            │  - Search           │
                                                            │  - Dashboards       │
                                                            │  - Alerts           │
                                                            └─────────────────────┘

Splunk Configuration

Step 1: Enable HTTP Event Collector

Enable HEC globally in Splunk:

Via Splunk Web UI:

  1. Navigate to Settings > Data Inputs > HTTP Event Collector
  2. Click Global Settings
  3. Set All Tokens to Enabled
  4. Configure Default Source Type to anchore:events
  5. Click Save

Via REST API:

curl -k -u admin:<password> -X POST \
  https://<splunk-host>:8089/servicesNS/nobody/splunk_httpinput/data/inputs/http/http/enable

Step 2: Create HEC Token

Create a dedicated HEC token for Anchore events:

Via Splunk Web UI:

  1. Navigate to Settings > Data Inputs > HTTP Event Collector
  2. Click New Token
  3. Configure:
    • Name: anchore_events
    • Source type: anchore:events
    • Index: main (or create a dedicated index)
  4. Click Submit
  5. Copy the generated token value

Via REST API:

curl -k -u admin:<password> -X POST \
  https://<splunk-host>:8089/servicesNS/nobody/splunk_httpinput/data/inputs/http \
  -d "name=anchore_events" \
  -d "sourcetype=anchore:events" \
  -d "index=main"

The response will include the token value.

Step 3: Create a Dedicated Index (Recommended)

For better data management, create a dedicated index for Anchore events:

Via Splunk Web UI:

  1. Navigate to Settings > Indexes
  2. Click New Index
  3. Configure:
    • Index Name: anchore_events
    • Max Size: Based on your retention needs
  4. Click Save
  5. Update your HEC token to use this index

Via REST API:

curl -k -u admin:<password> -X POST \
  https://<splunk-host>:8089/services/data/indexes \
  -d "name=anchore_events" \
  -d "datatype=event"

Fluent Bit Configuration

Configure Fluent Bit to forward events to Splunk HEC:

[OUTPUT]
    Name            splunk
    Match           anchore.events
    Host            ${SPLUNK_HEC_HOST}
    Port            ${SPLUNK_HEC_PORT}
    TLS             On
    TLS.Verify      On
    Splunk_Token    ${SPLUNK_HEC_TOKEN}
    Splunk_Send_Raw Off
    Event_Host      anchore-enterprise
    Event_Sourcetype anchore:events
    Event_Index     anchore_events
    Retry_Limit     5

Configuration Options

  • Host: Splunk HEC hostname (e.g., splunk.example.com)
  • Port: Splunk HEC port (e.g., 8088)
  • TLS: Enable TLS encryption (e.g., On)
  • TLS.Verify: Verify TLS certificates (e.g., On)
  • Splunk_Token: HEC authentication token (e.g., your-token-here)
  • Splunk_Send_Raw: Send raw JSON events (e.g., Off)
  • Event_Host: Host field value in Splunk (e.g., anchore-enterprise)
  • Event_Sourcetype: Sourcetype for events (e.g., anchore:events)
  • Event_Index: Target Splunk index (e.g., anchore_events)
  • Retry_Limit: Number of retry attempts (e.g., 5)

Environment Variables

Set these environment variables for Fluent Bit:

  • SPLUNK_HEC_HOST: Splunk HEC hostname (e.g., splunk.example.com)
  • SPLUNK_HEC_PORT: Splunk HEC port (e.g., 8088)
  • SPLUNK_HEC_TOKEN: HEC authentication token (e.g., your-hec-token)

TLS Configuration

For production deployments, always enable TLS verification:

[OUTPUT]
    Name            splunk
    ...
    TLS             On
    TLS.Verify      On
    TLS.CA_File     /path/to/ca-bundle.crt

If using self-signed certificates (not recommended for production):

[OUTPUT]
    Name            splunk
    ...
    TLS             On
    TLS.Verify      Off

Verification

Step 1: Test HEC Connectivity

Test the HEC endpoint directly:

curl -k -X POST "https://<splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <your-token>" \
  -d '{"event": "test event from anchore"}'

Expected response:

{"text":"Success","code":0}

Step 2: Check Fluent Bit Logs

Verify Fluent Bit is connecting to Splunk:

# Kubernetes
kubectl logs <fluent-bit-pod> | grep -i splunk

# Docker
docker logs <fluent-bit-container> 2>&1 | grep -i splunk

Look for:

  • [output:splunk:splunk.0] worker #0 started
  • No connection errors

Step 3: Search for Events in Splunk

Run a search in Splunk to verify events are arriving:

index=anchore_events sourcetype="anchore:events"

Or search for specific event types:

index=anchore_events event="image.vulnerability_report"
index=anchore_events event="tag.policy_evaluation"

Event Schema

Vulnerability Report Event

{
  "event": "image.vulnerability_report",
  "timestamp": "2024-01-15T10:30:45.123Z",
  "account_name": "admin",
  "resource_id": "sha256:abc123...",
  "payload": {
    "image_digest": "sha256:abc123...",
    "total_added": 15,
    "total_removed": 3,
    "added": [
      {
        "vulnerability_id": "CVE-2024-1234",
        "severity": "Critical",
        "package_name": "openssl",
        "package_version": "1.1.1k",
        "fixed_in": "1.1.1l",
        "url": "https://nvd.nist.gov/vuln/detail/CVE-2024-1234"
      }
    ],
    "removed": []
  }
}

Policy Evaluation Event

{
  "event": "tag.policy_evaluation",
  "timestamp": "2024-01-15T10:31:00.456Z",
  "account_name": "admin",
  "resource_id": "docker.io/library/alpine:latest",
  "payload": {
    "result": "fail",
    "policy_id": "default",
    "image_digest": "sha256:abc123...",
    "findings": [
      {
        "gate": "vulnerabilities",
        "trigger": "package",
        "action": "stop",
        "message": "Critical vulnerability found: CVE-2024-1234"
      }
    ]
  }
}

Splunk Searches

Basic Searches

All Anchore Events:

index=anchore_events sourcetype="anchore:events"

Vulnerability Reports Only:

index=anchore_events event="image.vulnerability_report"

Policy Evaluations Only:

index=anchore_events event="tag.policy_evaluation"

Failed Policy Evaluations:

index=anchore_events event="tag.policy_evaluation" payload.result="fail"

Vulnerability Analysis

Critical Vulnerabilities:

index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| where severity="Critical"
| table _time, account_name, resource_id, vulnerability_id, package_name, fixed_in

Top 10 Most Common CVEs:

index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| stats count by vulnerability_id
| sort -count
| head 10

Vulnerabilities by Severity:

index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| stats count by severity
| sort -count

Images with Most Vulnerabilities:

index=anchore_events event="image.vulnerability_report"
| stats sum(payload.total_added) as total_vulns by resource_id
| sort -total_vulns
| head 10

Policy Analysis

Policy Violations by Gate:

index=anchore_events event="tag.policy_evaluation" payload.result="fail"
| spath path=payload.findings{} output=findings
| mvexpand findings
| spath input=findings
| stats count by gate
| sort -count

Recent Policy Failures:

index=anchore_events event="tag.policy_evaluation" payload.result="fail"
| table _time, account_name, resource_id, payload.policy_id
| sort -_time
| head 20

Dashboards

Creating a Vulnerability Dashboard

Create a new dashboard in Splunk with the following panels:

Panel 1: Vulnerability Count Over Time

index=anchore_events event="image.vulnerability_report"
| timechart sum(payload.total_added) as "New Vulnerabilities"

Panel 2: Severity Distribution

index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| stats count by severity

Panel 3: Top Vulnerable Images

index=anchore_events event="image.vulnerability_report"
| stats sum(payload.total_added) as vulns by resource_id
| sort -vulns
| head 10

Creating a Policy Compliance Dashboard

Panel 1: Pass/Fail Ratio

index=anchore_events event="tag.policy_evaluation"
| stats count by payload.result

Panel 2: Policy Compliance Over Time

index=anchore_events event="tag.policy_evaluation"
| timechart count by payload.result

Panel 3: Recent Failures

index=anchore_events event="tag.policy_evaluation" payload.result="fail"
| table _time, account_name, resource_id, payload.policy_id
| sort -_time

Alerting

Critical Vulnerability Alert

Create an alert for new critical vulnerabilities:

Search:

index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| where severity="Critical"
| stats count as critical_count by resource_id
| where critical_count > 0

Alert Settings:

  • Trigger: Number of results > 0
  • Throttle: 1 hour per resource_id
  • Action: Email, Slack, or PagerDuty
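
The same alert can also be created programmatically through Splunk's saved-searches REST API. A sketch (attribute names follow the saved/searches endpoint; verify values against your Splunk version):

# Create a scheduled alert for new critical vulnerabilities
curl -k -u admin:<password> -X POST \
  https://<splunk-host>:8089/services/saved/searches \
  -d name=anchore_critical_vulns \
  --data-urlencode 'search=index=anchore_events event="image.vulnerability_report" | spath path=payload.added{} output=vulns | mvexpand vulns | spath input=vulns | where severity="Critical"' \
  -d is_scheduled=1 \
  --data-urlencode 'cron_schedule=*/15 * * * *' \
  --data-urlencode 'dispatch.earliest_time=-15m' \
  --data-urlencode 'dispatch.latest_time=now' \
  --data-urlencode 'alert_type=number of events' \
  --data-urlencode 'alert_comparator=greater than' \
  -d alert_threshold=0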

Policy Failure Alert

Create an alert for policy failures:

Search:

index=anchore_events event="tag.policy_evaluation" payload.result="fail"
| stats count by resource_id, payload.policy_id

Alert Settings:

  • Trigger: Number of results > 0
  • Throttle: Based on your requirements
  • Action: Your preferred notification method

Troubleshooting

No Events in Splunk

  1. Verify HEC is enabled:

    curl -k "https://<splunk-host>:8089/services/data/inputs/http?output_mode=json" \
      -u admin:<password>
    
  2. Test HEC endpoint:

    curl -k -X POST "https://<splunk-host>:8088/services/collector/event" \
      -H "Authorization: Splunk <token>" \
      -d '{"event": "test"}'
    
  3. Check Fluent Bit logs for errors:

    docker logs <fluent-bit-container> 2>&1 | tail -50
    
  4. Verify network connectivity:

    # From Fluent Bit container
    curl -k https://<splunk-host>:8088/services/collector/health
    

Authentication Errors

If you see 401 Unauthorized errors:

  1. Verify the HEC token is correct
  2. Check the token is enabled in Splunk
  3. Ensure the token has permission to write to the target index

TLS Errors

If you see certificate errors:

  1. Verify the CA certificate is correct
  2. Check certificate chain is complete
  3. For testing only: Set TLS.Verify Off (not recommended for production)

Missing Fields

If fields are not appearing in Splunk:

  1. Verify the sourcetype is set correctly
  2. Check field extractions in Splunk
  3. Use spath command to extract JSON fields in searches

Performance Tuning

High Volume Environments

For high-volume deployments:

  1. Increase Fluent Bit workers (Workers is a per-output setting):

    [OUTPUT]
        Name            splunk
        ...
        Workers         4
    
  2. Enable compression:

    [OUTPUT]
        Name            splunk
        ...
        compress        gzip
    
  3. Batch events (batching options vary by Fluent Bit version; verify support in your release):

    [OUTPUT]
        Name            splunk
        ...
        Batch_Size      2048
    

Splunk Indexer Optimization

  1. Create a dedicated index for Anchore events
  2. Configure appropriate retention policies
  3. Consider using indexed extractions for frequently searched fields (see the props.conf sketch below)
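
A minimal props.conf sketch for index-time JSON extraction (deploy to your indexers or heavy forwarders; pairing with KV_MODE = none avoids double extraction at search time):

[anchore:events]
INDEXED_EXTRACTIONS = json
KV_MODE = none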
