1 - Event Stream Configuration
Overview
The Data Event Stream feature is configured through the Anchore Enterprise configuration file or Helm values. This page covers the configuration options for enabling event streaming and customizing its behavior.
Prerequisites
Before enabling the event stream:
- Ensure you have a valid license with the Data Stream entitlement
- Plan your shared volume strategy for the data event files; the maximum file size and file count determine storage requirements (see the sizing example after this list)
- Determine your destination system (Splunk, Elasticsearch, etc.)
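For example, with the defaults used later on this page (max_file_size_mb: 100, max_file_count: 5), budget roughly 100 MB × 5 = 500 MB of event data per Reports Worker, plus a small allowance for Fluent Bit's position database.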
Configuration
Ports
| Component | Port | Purpose |
|---|---|---|
| Fluent Bit Health | 2020 | Health checks and metrics endpoint |
| (example) Splunk HEC | 8088 | HTTP Event Collector ingestion |
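With the Fluent Bit HTTP server enabled on port 2020 (as it is in the configurations later in this guide), you can probe it directly. A quick sketch, assuming you run it from the Fluent Bit container or have the port forwarded:
# Query Fluent Bit's built-in HTTP server
curl -s http://localhost:2020/api/v1/uptime
curl -s http://localhost:2020/api/v1/metrics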
Shared Data Files
| Path | Description |
|---|---|
| /var/log/anchore/events/ | Default directory for data event files |
| /var/log/anchore/events/events.json.* | Rotating data event files (timestamped) |
| /var/log/anchore/events/offsets.db | Fluent Bit file position tracking database |
Helm Values (Kubernetes)
To enable event streaming in a Kubernetes deployment using Helm, add the following to your values.yaml:
anchoreConfig:
  reports_worker:
    event_stream:
      enabled: true
      max_file_size_mb: 100
      max_file_count: 5
    cycle_timers:
      event_stream_health: 60
Config File (Docker Compose / Standalone)
For Docker Compose or standalone deployments, add the following to your config.yaml:
services:
  reports_worker:
    event_stream:
      enabled: true
      max_file_size_mb: 100
      max_file_count: 5
    cycle_timers:
      event_stream_health: 60
Important
The event_stream_health cycle timer must be configured for proper operation. Without it, processed files will not be cleaned up.
Volume Configuration
The Reports Worker and the Fluent Bit sidecar must have read/write access to the data event directory.
Kubernetes
Create a shared volume between the Reports Worker and Fluent Bit:
# In your Helm values or deployment manifest
volumes:
  - name: anchore-events
    emptyDir: {}

# Reports Worker container
volumeMounts:
  - name: anchore-events
    mountPath: /var/log/anchore

# Fluent Bit container
volumeMounts:
  - name: anchore-events
    mountPath: /var/log/anchore
Docker Compose
Use a named volume shared between containers:
volumes:
  anchore-events:

services:
  reports-worker:
    volumes:
      - anchore-events:/var/log/anchore
  fluent-bit:
    volumes:
      # rw: Fluent Bit must write its position database (offsets.db) on this volume
      - anchore-events:/var/log/anchore:rw
File Rotation
Data event files are rotated based on the max_file_size_mb setting. When a file reaches the maximum size, a new file
is created with a timestamp suffix:
events.json.20240115T103045Z
events.json.20240115T114532Z
events.json.20240115T125018Z
The max_file_count setting determines how many files are retained. Older files are deleted after they have been
processed by Fluent Bit (tracked via the position database).
Health Monitoring
The data stream health watcher runs at the interval specified by event_stream_health and performs the following tasks:
- Cleanup: Removes event files that have been fully processed by Fluent Bit
- Emitter Resume Detection: Detects when the data stream has been suspended and resumes processing when possible
Viewing Integration Status
The system event notification system provides events related to the data stream health. You can view these events via the API or UI.
Filter on event type system.event_stream.suspend and system.event_stream.resume to see suspension and resumption events.
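For example, a hedged sketch using anchorectl (assuming it is installed and configured against your deployment):
# List recent events and filter for data stream suspend/resume notifications
anchorectl event list | grep -E 'system\.event_stream\.(suspend|resume)'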
Verification
After enabling event streaming, verify it is working:
Step 1: Analyze an Image
Analyze a new image to generate vulnerability and policy events:
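A minimal sketch using anchorectl (the image reference is illustrative):
# Submit an image for analysis and wait for it to complete
anchorectl image add docker.io/library/alpine:latest --wait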
Step 2: Check Data Event Files
You should see one or more event files with the pattern events.json.*.
For example:
# Kubernetes
kubectl exec -it <reports-worker-pod> -- ls -la /var/log/anchore/events/
Defaulted container "enterprise-reportsworker" out of: enterprise-reportsworker, fluent-bit (init)
total 51680
drwxrwsrwx. 2 root anchore 104 Jan 10 18:17 .
drwxrwxr-x. 1 anchore root 113 Jan 10 15:15 ..
-rw-r--r--. 1 anchore anchore 35440931 Jan 10 18:41 events.json.20260110T181643Z
-rw-r--r--. 1 anchore anchore 8192 Jan 10 18:02 offsets.db
-rw-r--r--. 1 anchore anchore 32768 Jan 10 18:41 offsets.db-shm
-rw-r--r--. 1 anchore anchore 4120032 Jan 10 18:41 offsets.db-wal
# Docker
docker exec <reports-worker-container> ls -la /var/log/anchore/events/
total 5092
drwxr-xr-x 2 root root 4096 Jan 10 18:29 .
drwxrwxr-x 3 anchore root 4096 Jan 10 15:21 ..
-rw-r--r-- 1 root root 5163541 Jan 10 15:26 events.json.20260110T152612Z
-rw-r--r-- 1 root root 8192 Jan 10 17:53 offsets.db
-rw-r--r-- 1 root root 32768 Jan 10 18:29 offsets.db-shm
-rw-r--r-- 1 root root 0 Jan 10 18:29 offsets.db-wal
Troubleshooting
No Event Files Created
- Verify enabled: true is set in the configuration
- Check that the Reports Worker has write permissions to the directory
- Ensure the event_stream_health cycle timer is configured
- Check Reports Worker logs for errors
Events Not Being Processed
- Verify Fluent Bit is running and can read the event files
- Check the position database (offsets.db) exists and is being updated
- Review Fluent Bit logs for connection or parsing errors
Data Stream is Suspended
If the data stream becomes suspended due to unprocessed files accumulating, consider:
- Increasing max_file_size_mb to buffer more data and allow Fluent Bit to catch up
- Increasing max_file_count to retain more files during high-volume periods
- Ensuring Fluent Bit is keeping up with event production
Next Steps
2 - Fluent Bit Integration
Overview
Fluent Bit is a lightweight, high-performance log processor and forwarder that serves as the bridge between Anchore
Enterprise event files and your destination system. This guide covers deploying Fluent Bit as a sidecar container to
forward events to external systems.
Prerequisites
- Data event streaming enabled in Anchore Enterprise
- Shared volume configured between Reports Worker and Fluent Bit
- Network access from Fluent Bit to your destination system
Architecture
Fluent Bit runs as a sidecar container alongside the Reports Worker, sharing a volume for event files:
┌──────────────────────────────────────────────────────────────────────┐
│ Kubernetes Pod │
│ ┌─────────────────┐ ┌─────────────────────────┐ │
│ │ Reports Worker │ │ Fluent Bit │ │
│ │ │ │ │ │
│ │ Data Event │ │ Tail Input Plugin │ │
│ │ Emitter │ │ │ │ │
│ └────────┬────────┘ │ ▼ │ │
│ │ │ JSON Parser │ │
│ │ writes │ │ │ │
│ ▼ │ ▼ │ │
│ ┌──────────────────────────┐ │ Output Plugin ─────────┼───┼──► Splunk/Elastic/etc
│ │ /var/log/anchore/events/ │◄─────────┤ (HTTP/HEC) │ │
│ │ │ reads │ │ │
│ └──────────────────────────┘ └─────────────────────────┘ │
│ Shared Volume │
└──────────────────────────────────────────────────────────────────────┘
Deployment
Kubernetes (Helm)
Add a Fluent Bit sidecar to your Anchore Enterprise deployment by modifying your Helm values:
reportsWorker:
  extraVolumes:
    - name: anchore-events
      emptyDir: {}
    - name: fluent-bit-config
      configMap:
        name: fluent-bit-config
        defaultMode: 0644
    - name: fluent-bit-lua-helpers
      configMap:
        name: fluent-bit-lua-helpers
        defaultMode: 0644
  extraVolumeMounts:
    - name: anchore-events
      mountPath: /var/log/anchore/events
  initContainers:
    - name: fluent-bit
      image: fluent/fluent-bit:latest
      imagePullPolicy: IfNotPresent
      restartPolicy: Always
      ports:
        - containerPort: 2020
          name: metrics
          protocol: TCP
      volumeMounts:
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/fluent-bit.conf
          subPath: fluent-bit.conf
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/parsers.conf
          subPath: parsers.conf
          readOnly: true
        - name: fluent-bit-lua-helpers
          mountPath: /fluent-bit/etc/anchore_helpers.lua
          subPath: anchore_helpers.lua
        - name: anchore-events
          mountPath: /var/log/anchore/events
Create a ConfigMap for Fluent Bit configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush              1
        Daemon             Off
        Log_Level          info
        Parsers_File       parsers.conf
        HTTP_Server        On
        HTTP_Listen        0.0.0.0
        HTTP_Port          2020

    [INPUT]
        Name               tail
        Path               /var/log/anchore/events/events.json.*
        Tag                anchore.events
        Parser             json
        DB                 /var/log/anchore/events/offsets.db
        Mem_Buf_Limit      64MB
        Buffer_Chunk_Size  32MB
        Buffer_Max_Size    64MB
        Skip_Long_Lines    Off
        Refresh_Interval   10
        Rotate_Wait        5
        Read_from_Head     On

    [FILTER]
        Name               modify
        Match              anchore.events
        Add                anchore_service reports_worker

    [OUTPUT]
        Name               splunk
        Match              anchore.events
        Host               ${SPLUNK_HEC_HOST}
        Port               ${SPLUNK_HEC_PORT}
        TLS                On
        TLS.Verify         On
        Splunk_Token       ${SPLUNK_HEC_TOKEN}
        Splunk_Send_Raw    Off
        Event_Host         anchore-enterprise
        Event_Sourcetype   anchore:events
        Retry_Limit        5
  parsers.conf: |
    [PARSER]
        Name         json
        Format       json
        Time_Key     timestamp
        Time_Format  %Y-%m-%dT%H:%M:%S.%LZ
        Time_Keep    On
Docker Compose
Add Fluent Bit to your Docker Compose configuration:
services:
  fluent-bit:
    image: fluent/fluent-bit:latest
    restart: unless-stopped
    volumes:
      - ./fluent-bit/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf:ro
      - ./fluent-bit/parsers.conf:/fluent-bit/etc/parsers.conf:ro
      - anchore-events:/var/log/anchore:rw
    environment:
      SPLUNK_HEC_HOST: "${SPLUNK_HEC_HOST:-splunk}"
      SPLUNK_HEC_PORT: "${SPLUNK_HEC_PORT:-8088}"
      SPLUNK_HEC_TOKEN: "${SPLUNK_HEC_TOKEN}"
    ports:
      - "2020:2020"
    depends_on:
      - reports-worker
    networks:
      - anchore-network

volumes:
  anchore-events:
Create the configuration files in a fluent-bit/ directory:
fluent-bit/fluent-bit.conf:
[SERVICE]
    Flush              1
    Daemon             Off
    Log_Level          info
    Parsers_File       parsers.conf
    HTTP_Server        On
    HTTP_Listen        0.0.0.0
    HTTP_Port          2020

[INPUT]
    Name               tail
    Path               /var/log/anchore/events/events.json.*
    Tag                anchore.events
    Parser             json
    DB                 /var/log/anchore/events/offsets.db
    Mem_Buf_Limit      64MB
    Buffer_Chunk_Size  32MB
    Buffer_Max_Size    64MB
    Skip_Long_Lines    Off
    Refresh_Interval   10
    Rotate_Wait        5
    Read_from_Head     On

[FILTER]
    Name               modify
    Match              anchore.events
    Add                anchore_service reports_worker

[OUTPUT]
    Name               splunk
    Match              anchore.events
    Host               ${SPLUNK_HEC_HOST}
    Port               ${SPLUNK_HEC_PORT}
    TLS                On
    TLS.Verify         On
    Splunk_Token       ${SPLUNK_HEC_TOKEN}
    Splunk_Send_Raw    Off
    Event_Host         anchore-enterprise
    Event_Sourcetype   anchore:events
    Retry_Limit        5
fluent-bit/parsers.conf:
[PARSER]
    Name         json
    Format       json
    Time_Key     timestamp
    Time_Format  %Y-%m-%dT%H:%M:%S.%LZ
    Time_Keep    On
Configuration Reference
The tail input plugin monitors event files and tracks read positions:
| Parameter | Value | Description |
|---|---|---|
| Name | tail | Use the tail input plugin |
| Path | /var/log/anchore/events/events.json.* | Pattern matching event files |
| Tag | anchore.events | Tag for routing to outputs |
| Parser | json | Parse each line as JSON |
| DB | /var/log/anchore/events/offsets.db | SQLite database for position tracking |
| Mem_Buf_Limit | 64MB | Memory buffer limit |
| Buffer_Chunk_Size | 32MB | Buffer chunk size for reading |
| Buffer_Max_Size | 64MB | Maximum buffer size per file |
| Read_from_Head | On | Read from beginning for new files |
| Refresh_Interval | 10 | Seconds between file checks |
| Rotate_Wait | 5 | Seconds to wait before processing rotated files |
Buffer Sizing
Vulnerability reports can be large (10-100+ KB per event). The buffer settings should accommodate your largest expected events:
| Event Type | Typical Size | Recommended Buffer |
|---|---|---|
| Vulnerability Report (few CVEs) | 10-50 KB | 32 MB |
| Vulnerability Report (many CVEs) | 100-500 KB | 64 MB |
| Vulnerability Report (large image) | 500 KB - 2 MB | 128 MB |
| Policy Evaluation | 5-20 KB | 32 MB |
For large images with many vulnerabilities, increase the buffer settings:
[INPUT]
    ...
    Mem_Buf_Limit      128MB
    Buffer_Chunk_Size  64MB
    Buffer_Max_Size    128MB
Position Tracking
Fluent Bit uses an SQLite database to track which events have been read and forwarded. This ensures:
- Events are not re-sent after Fluent Bit restarts
- Each file is tracked independently by inode
- Progress is persistent across container restarts
The position database is stored at the path specified by DB and should be on the same volume as the event files.
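If you need to confirm what Fluent Bit has recorded, you can inspect the database directly. A sketch, assuming the sqlite3 CLI is available in the container and that the tail plugin's state lives in its usual in_tail_files table:
# Show tracked files, read offsets, and inodes from the position database
sqlite3 /var/log/anchore/events/offsets.db \
  'SELECT name, offset, inode FROM in_tail_files;'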
Output Plugins
Fluent Bit supports multiple output destinations. Common options include:
Splunk
See the Splunk Integration guide for detailed configuration.
[OUTPUT]
    Name          splunk
    Match         anchore.events
    Host          ${SPLUNK_HEC_HOST}
    Port          ${SPLUNK_HEC_PORT}
    TLS           On
    TLS.Verify    On
    Splunk_Token  ${SPLUNK_HEC_TOKEN}
Elasticsearch
[OUTPUT]
    Name         es
    Match        anchore.events
    Host         elasticsearch.example.com
    Port         9200
    Index        anchore-events
    Type         _doc
    TLS          On
    TLS.Verify   On
    HTTP_User    ${ES_USER}
    HTTP_Passwd  ${ES_PASSWORD}
HTTP (Generic Webhook)
[OUTPUT]
    Name        http
    Match       anchore.events
    Host        webhook.example.com
    Port        443
    URI         /api/events
    Format      json
    TLS         On
    TLS.Verify  On
    Header      Authorization Bearer ${API_TOKEN}
Stdout (Debugging)
For troubleshooting, add stdout output to see events in container logs:
[OUTPUT]
    Name    stdout
    Match   anchore.events
    Format  json_lines
Adding Custom Fields
Add custom fields to all events:
[FILTER]
    Name   modify
    Match  anchore.events
    Add    environment production
    Add    cluster_name my-cluster
    Add    anchore_service reports_worker
Filtering by Event Type
Route different event types to different outputs:
[FILTER]
    Name   rewrite_tag
    Match  anchore.events
    Rule   $event ^(image\.vulnerability_report)$ vuln.$1 false
    Rule   $event ^(tag\.policy_evaluation)$ policy.$1 false

[OUTPUT]
    Name          splunk
    Match         vuln.*
    Host          ${SPLUNK_HEC_HOST}
    Splunk_Token  ${VULN_TOKEN}
    Event_Index   vulnerabilities

[OUTPUT]
    Name          splunk
    Match         policy.*
    Host          ${SPLUNK_HEC_HOST}
    Splunk_Token  ${POLICY_TOKEN}
    Event_Index   policy_evaluations
Troubleshooting
No Events Forwarded
Check event files exist:
ls -la /var/log/anchore/events/
Verify Fluent Bit can read files:
docker logs <fluent-bit-container> 2>&1 | grep -i "tail"
Check position database:
ls -la /var/log/anchore/events/offsets.db
Enable debug logging:
[SERVICE]
    Log_Level  debug
Connection Errors
Verify network connectivity:
# From inside the Fluent Bit container
curl -k https://${SPLUNK_HEC_HOST}:${SPLUNK_HEC_PORT}/services/collector/health
Check TLS settings: If using self-signed certificates, you may need TLS.Verify Off (not recommended for production)
Verify credentials: Test HEC token directly:
curl -k -X POST "https://${SPLUNK_HEC_HOST}:${SPLUNK_HEC_PORT}/services/collector/event" \
-H "Authorization: Splunk ${SPLUNK_HEC_TOKEN}" \
-d '{"event": "test"}'
Buffer Overflow
If you see buffer full errors:
Increase buffer limits:
Mem_Buf_Limit 128MB
Buffer_Max_Size 128MB
Check destination throughput - events may be produced faster than they can be forwarded
Consider adding backpressure handling with storage.type filesystem, as sketched below
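A minimal sketch of that filesystem buffering, using Fluent Bit's storage options (the storage path is illustrative):
[SERVICE]
    ...
    storage.path  /var/log/anchore/fb-storage
    storage.sync  normal

[INPUT]
    Name          tail
    ...
    storage.type  filesystem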
Re-sending All Events
To reset position tracking and re-send all events:
# Stop Fluent Bit
# Delete the position database
rm /var/log/anchore/events/offsets.db
# Restart Fluent Bit
Warning
Re-sending all events will create duplicate events in your destination system. Use with caution.
Next Steps
3 - Splunk Integration
Overview
This guide covers integrating Anchore Enterprise data streaming with Splunk using the HTTP Event Collector (HEC). Once configured, vulnerability reports and policy evaluations will flow into Splunk for search, alerting, and dashboard visualization.
Prerequisites
- Data event streaming enabled in Anchore Enterprise
- Fluent Bit deployed and forwarding events (see the Fluent Bit Integration guide)
- A Splunk instance with administrative access to configure the HTTP Event Collector
Architecture
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│ Anchore Enterprise │ │ Fluent Bit │ │ Splunk │
│ │ │ │ │ │
│ Reports Worker │──────►│ Tail + JSON Parse │──────►│ HTTP Event │
│ Event Files │ NDJSON│ Splunk Output │ HTTPS │ Collector (HEC) │
│ │ │ │ │ │
└─────────────────────┘ └─────────────────────┘ └─────────────────────┘
│
▼
┌─────────────────────┐
│ Splunk Index │
│ - Search │
│ - Dashboards │
│ - Alerts │
└─────────────────────┘
Splunk Configuration
Step 1: Enable HTTP Event Collector
Enable HEC globally in Splunk:
Via Splunk Web UI:
- Navigate to Settings > Data Inputs > HTTP Event Collector
- Click Global Settings
- Set All Tokens to Enabled
- Configure Default Source Type to anchore:events
- Click Save
Via REST API:
curl -k -u admin:<password> -X POST \
https://<splunk-host>:8089/servicesNS/nobody/splunk_httpinput/data/inputs/http/http/enable
Step 2: Create HEC Token
Create a dedicated HEC token for Anchore events:
Via Splunk Web UI:
- Navigate to Settings > Data Inputs > HTTP Event Collector
- Click New Token
- Configure:
  - Name: anchore_events
  - Source type: anchore:events
  - Index: main (or create a dedicated index)
- Click Submit
- Copy the generated token value
Via REST API:
curl -k -u admin:<password> -X POST \
https://<splunk-host>:8089/servicesNS/nobody/splunk_httpinput/data/inputs/http \
-d "name=anchore_events" \
-d "sourcetype=anchore:events" \
-d "index=main"
The response will include the token value.
Step 3: Create Dedicated Index (Recommended)
For better data management, create a dedicated index for Anchore events:
Via Splunk Web UI:
- Navigate to Settings > Indexes
- Click New Index
- Configure:
  - Index Name: anchore_events
  - Max Size: Based on your retention needs
- Click Save
- Update your HEC token to use this index
Via REST API:
curl -k -u admin:<password> -X POST \
https://<splunk-host>:8089/services/data/indexes \
-d "name=anchore_events" \
-d "datatype=event"
Fluent Bit Configuration
Configure Fluent Bit to forward events to Splunk HEC:
[OUTPUT]
    Name              splunk
    Match             anchore.events
    Host              ${SPLUNK_HEC_HOST}
    Port              ${SPLUNK_HEC_PORT}
    TLS               On
    TLS.Verify        On
    Splunk_Token      ${SPLUNK_HEC_TOKEN}
    Splunk_Send_Raw   Off
    Event_Host        anchore-enterprise
    Event_Sourcetype  anchore:events
    Event_Index       anchore_events
    Retry_Limit       5
Configuration Options
| Parameter | Description | Example |
|---|---|---|
| Host | Splunk HEC hostname | splunk.example.com |
| Port | Splunk HEC port | 8088 |
| TLS | Enable TLS encryption | On |
| TLS.Verify | Verify TLS certificates | On |
| Splunk_Token | HEC authentication token | your-token-here |
| Splunk_Send_Raw | Send raw JSON events | Off |
| Event_Host | Host field value in Splunk | anchore-enterprise |
| Event_Sourcetype | Sourcetype for events | anchore:events |
| Event_Index | Target Splunk index | anchore_events |
| Retry_Limit | Number of retry attempts | 5 |
Environment Variables
Set these environment variables for Fluent Bit:
| Variable | Description | Example |
|---|---|---|
| SPLUNK_HEC_HOST | Splunk HEC hostname | splunk.example.com |
| SPLUNK_HEC_PORT | Splunk HEC port | 8088 |
| SPLUNK_HEC_TOKEN | HEC authentication token | your-hec-token |
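For Kubernetes deployments, a minimal sketch of injecting these into the Fluent Bit sidecar, assuming the token is stored in a Secret (the Secret name splunk-hec is hypothetical):
env:
  - name: SPLUNK_HEC_HOST
    value: splunk.example.com
  - name: SPLUNK_HEC_PORT
    value: "8088"
  - name: SPLUNK_HEC_TOKEN
    valueFrom:
      secretKeyRef:
        name: splunk-hec  # hypothetical Secret holding the HEC token
        key: token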
Security
Store the HEC token securely using Kubernetes Secrets or environment variable injection. Never commit tokens to version control.
TLS Configuration
For production deployments, always enable TLS verification:
[OUTPUT]
    Name         splunk
    ...
    TLS          On
    TLS.Verify   On
    TLS.CA_File  /path/to/ca-bundle.crt
If using self-signed certificates (not recommended for production):
[OUTPUT]
    Name        splunk
    ...
    TLS         On
    TLS.Verify  Off
Verification
Step 1: Test HEC Connectivity
Test the HEC endpoint directly:
curl -k -X POST "https://<splunk-host>:8088/services/collector/event" \
-H "Authorization: Splunk <your-token>" \
-d '{"event": "test event from anchore"}'
Expected response:
{"text":"Success","code":0}
Step 2: Check Fluent Bit Logs
Verify Fluent Bit is connecting to Splunk:
# Kubernetes
kubectl logs <fluent-bit-pod> | grep -i splunk
# Docker
docker logs <fluent-bit-container> 2>&1 | grep -i splunk
Look for:
- [output:splunk:splunk.0] worker #0 started
- No connection errors
Step 3: Search for Events in Splunk
Run a search in Splunk to verify events are arriving:
index=anchore_events sourcetype="anchore:events"
Or search for specific event types:
index=anchore_events event="image.vulnerability_report"
index=anchore_events event="tag.policy_evaluation"
Event Schema
Vulnerability Report Event
{
  "event": "image.vulnerability_report",
  "timestamp": "2024-01-15T10:30:45.123Z",
  "account_name": "admin",
  "resource_id": "sha256:abc123...",
  "payload": {
    "image_digest": "sha256:abc123...",
    "total_added": 15,
    "total_removed": 3,
    "added": [
      {
        "vulnerability_id": "CVE-2024-1234",
        "severity": "Critical",
        "package_name": "openssl",
        "package_version": "1.1.1k",
        "fixed_in": "1.1.1l",
        "url": "https://nvd.nist.gov/vuln/detail/CVE-2024-1234"
      }
    ],
    "removed": []
  }
}
Policy Evaluation Event
{
  "event": "tag.policy_evaluation",
  "timestamp": "2024-01-15T10:31:00.456Z",
  "account_name": "admin",
  "resource_id": "docker.io/library/alpine:latest",
  "payload": {
    "result": "fail",
    "policy_id": "default",
    "image_digest": "sha256:abc123...",
    "findings": [
      {
        "gate": "vulnerabilities",
        "trigger": "package",
        "action": "stop",
        "message": "Critical vulnerability found: CVE-2024-1234"
      }
    ]
  }
}
Splunk Searches
Basic Searches
All Anchore Events:
index=anchore_events sourcetype="anchore:events"
Vulnerability Reports Only:
index=anchore_events event="image.vulnerability_report"
Policy Evaluations Only:
index=anchore_events event="tag.policy_evaluation"
Failed Policy Evaluations:
index=anchore_events event="tag.policy_evaluation" payload.result="fail"
Vulnerability Analysis
Critical Vulnerabilities:
index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| where severity="Critical"
| table _time, account_name, resource_id, vulnerability_id, package_name, fixed_in
Top 10 Most Common CVEs:
index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| stats count by vulnerability_id
| sort -count
| head 10
Vulnerabilities by Severity:
index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| stats count by severity
| sort -count
Images with Most Vulnerabilities:
index=anchore_events event="image.vulnerability_report"
| stats sum(payload.total_added) as total_vulns by resource_id
| sort -total_vulns
| head 10
Policy Analysis
Policy Violations by Gate:
index=anchore_events event="tag.policy_evaluation" payload.result="fail"
| spath path=payload.findings{} output=findings
| mvexpand findings
| spath input=findings
| stats count by gate
| sort -count
Recent Policy Failures:
index=anchore_events event="tag.policy_evaluation" payload.result="fail"
| table _time, account_name, resource_id, payload.policy_id
| sort -_time
| head 20
Dashboards
Creating a Vulnerability Dashboard
Create a new dashboard in Splunk with the following panels:
Panel 1: Vulnerability Count Over Time
index=anchore_events event="image.vulnerability_report"
| timechart sum(payload.total_added) as "New Vulnerabilities"
Panel 2: Severity Distribution
index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| stats count by severity
Panel 3: Top Vulnerable Images
index=anchore_events event="image.vulnerability_report"
| stats sum(payload.total_added) as vulns by resource_id
| sort -vulns
| head 10
Creating a Policy Compliance Dashboard
Panel 1: Pass/Fail Ratio
index=anchore_events event="tag.policy_evaluation"
| stats count by payload.result
Panel 2: Policy Compliance Over Time
index=anchore_events event="tag.policy_evaluation"
| timechart count by payload.result
Panel 3: Recent Failures
index=anchore_events event="tag.policy_evaluation" payload.result="fail"
| table _time, account_name, resource_id, payload.policy_id
| sort -_time
Alerting
Critical Vulnerability Alert
Create an alert for new critical vulnerabilities:
Search:
index=anchore_events event="image.vulnerability_report"
| spath path=payload.added{} output=vulns
| mvexpand vulns
| spath input=vulns
| where severity="Critical"
| stats count as critical_count by resource_id
| where critical_count > 0
Alert Settings:
- Trigger: Number of results > 0
- Throttle: 1 hour per resource_id
- Action: Email, Slack, or PagerDuty
Policy Failure Alert
Create an alert for policy failures:
Search:
index=anchore_events event="tag.policy_evaluation" payload.result="fail"
| stats count by resource_id, payload.policy_id
Alert Settings:
- Trigger: Number of results > 0
- Throttle: Based on your requirements
- Action: Your preferred notification method
Troubleshooting
No Events in Splunk
Verify HEC is enabled:
curl -k "https://<splunk-host>:8089/services/data/inputs/http?output_mode=json" \
-u admin:<password>
Test HEC endpoint:
curl -k -X POST "https://<splunk-host>:8088/services/collector/event" \
-H "Authorization: Splunk <token>" \
-d '{"event": "test"}'
Check Fluent Bit logs for errors:
docker logs <fluent-bit-container> 2>&1 | tail -50
Verify network connectivity:
# From Fluent Bit container
curl -k https://<splunk-host>:8088/services/collector/health
Authentication Errors
If you see 401 Unauthorized errors:
- Verify the HEC token is correct
- Check the token is enabled in Splunk
- Ensure the token has permission to write to the target index
TLS Errors
If you see certificate errors:
- Verify the CA certificate is correct
- Check certificate chain is complete
- For testing only: Set TLS.Verify Off (not recommended for production)
Missing Fields
If fields are not appearing in Splunk:
- Verify the sourcetype is set correctly
- Check field extractions in Splunk
- Use the spath command to extract JSON fields in searches
High Volume Environments
For high-volume deployments:
Increase Fluent Bit workers:
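For example (Workers is a standard Fluent Bit output option; the count shown is illustrative):
[OUTPUT]
    Name     splunk
    ...
    Workers  2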
Enable compression:
[OUTPUT]
    Name      splunk
    ...
    compress  gzip
Batch events:
[OUTPUT]
    Name        splunk
    ...
    Batch_Size  2048
Splunk Indexer Optimization
- Create a dedicated index for Anchore events
- Configure appropriate retention policies
- Consider using indexed extractions for frequently searched fields (a search-time alternative is sketched below)
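One sketch of search-time JSON extraction via props.conf (standard Splunk settings; deploy wherever your search-time configuration lives):
# props.conf
[anchore:events]
KV_MODE = json
TRUNCATE = 0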
Next Steps