Event Stream Configuration
Overview
The Data Event Stream feature is configured through the Anchore Enterprise configuration file or Helm values. This page covers the configuration options for enabling event streaming and customizing its behavior.
Prerequisites
Before enabling the event stream:
- Ensure you have a valid license with the Data Stream entitlement
- Plan your shared volume strategy for the data event files. Maximum file size and count will impact storage requirements.
- Determine your destination system (Splunk, Elasticsearch, etc.)
Configuration
Ports
| Component | Port | Purpose |
|---|---|---|
| Fluent Bit Health | 2020 | Health checks and metrics endpoint |
| (example) Splunk HEC | 8088 | HTTP Event Collector ingestion |
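To confirm the Fluent Bit sidecar is listening, you can query its health and metrics endpoints on port 2020. This is a minimal sketch, assuming the Fluent Bit HTTP server (and its health check) are enabled in your sidecar configuration:
# Liveness of the Fluent Bit sidecar (assumes the HTTP server is enabled on port 2020)
curl -s http://localhost:2020/api/v1/health
# Input/output metrics, useful to confirm events are flowing
curl -s http://localhost:2020/api/v1/metrics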
Shared Data Files
| Path | Description |
|---|---|
| /var/log/anchore/events/ | Default directory for data event files |
| /var/log/anchore/events/events.json.* | Rotating data event files (timestamped) |
| /var/log/anchore/events/offsets.db | Fluent Bit file position tracking database |
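These paths correspond to the Fluent Bit tail input in the sidecar. A minimal sketch, assuming a standard tail input; your actual Fluent Bit configuration may differ:
[INPUT]
    Name    tail
    Path    /var/log/anchore/events/events.json.*
    DB      /var/log/anchore/events/offsets.db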
Helm Values (Kubernetes)
To enable event streaming in a Kubernetes deployment using Helm, add the following to your values.yaml:
anchoreConfig:
  reports_worker:
    event_stream:
      enabled: true
      max_file_size_mb: 100
      max_file_count: 5
    cycle_timers:
      event_stream_health: 60
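After updating values.yaml, apply the change with a Helm upgrade. The release name, chart reference, and namespace below are placeholders for your deployment:
helm upgrade <release-name> anchore/enterprise -f values.yaml -n <namespace>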
Config File (Docker Compose / Standalone)
For Docker Compose or standalone deployments, add the following to your config.yaml:
services:
  reports_worker:
    event_stream:
      enabled: true
      max_file_size_mb: 100
      max_file_count: 5
    cycle_timers:
      event_stream_health: 60
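After editing config.yaml, restart the Reports Worker so the new settings take effect. For example, with Docker Compose (the service name is a placeholder for whatever your compose file uses):
docker compose restart <reports-worker-service>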
Important
The event_stream_health cycle timer must be configured for proper operation. Without it, processed files will not be cleaned up.
Volume Configuration
The Reports Worker and the Fluent Bit sidecar must have read/write access to the data event directory.
Kubernetes
Create a shared volume between the Reports Worker and Fluent Bit:
# In your Helm values or deployment manifest
volumes:
  - name: anchore-events
    emptyDir: {}

# Reports Worker container
volumeMounts:
  - name: anchore-events
    mountPath: /var/log/anchore

# Fluent Bit container
volumeMounts:
  - name: anchore-events
    mountPath: /var/log/anchore
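A pod-level sketch showing how the volume and mounts fit together if you manage the manifest yourself; the container names and images are illustrative, and the Helm chart may wire the sidecar differently (for example, as a native sidecar/init container):
containers:
  - name: enterprise-reportsworker
    image: <your-anchore-enterprise-image>
    volumeMounts:
      - name: anchore-events
        mountPath: /var/log/anchore
  - name: fluent-bit
    image: fluent/fluent-bit:<version>
    volumeMounts:
      - name: anchore-events
        mountPath: /var/log/anchore
volumes:
  - name: anchore-events
    emptyDir: {}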
Docker Compose
Use a named volume shared between containers:
volumes:
  anchore-events:

services:
  reports-worker:
    volumes:
      - anchore-events:/var/log/anchore
  fluent-bit:
    volumes:
      - anchore-events:/var/log/anchore
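A slightly fuller sketch of the fluent-bit service, assuming the official fluent/fluent-bit image and a locally maintained fluent-bit.conf; both are illustrative:
  fluent-bit:
    image: fluent/fluent-bit:<version>
    volumes:
      - anchore-events:/var/log/anchore
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf:ro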
File Rotation
Data event files are rotated based on the max_file_size_mb setting. When a file reaches the maximum size, a new file
is created with a timestamp suffix:
events.json.20240115T103045Z
events.json.20240115T114532Z
events.json.20240115T125018Z
The max_file_count setting determines how many files are retained. Older files are deleted after they have been
processed by Fluent Bit (tracked via the position database).
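For example, with the settings shown above (max_file_size_mb: 100 and max_file_count: 5), plan for roughly 100 MB × 5 = 500 MB of event data per Reports Worker, plus the small Fluent Bit position database.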
Health Monitoring
The data stream health watcher runs at the interval specified by event_stream_health and performs the following tasks:
- Cleanup: Removes event files that have been fully processed by Fluent Bit
- Emitter Resume Detection: Detects when the data stream has been suspended and allows it to resume processing when possible
Viewing Integration Status
The system event notification system provides events related to the data stream health. You can view these events via the API or UI.
Filter on the event types system.event_stream.suspend and system.event_stream.resume to see suspension and resumption events.
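For example, you can query these events through the API. The endpoint path, query parameter, and credentials below are assumptions and may need adjusting for your deployment and API version:
# List suspension events for the data stream (URL and parameter names are illustrative)
curl -u <username>:<password> "http://<anchore-api-host>:8228/v2/events?event_type=system.event_stream.suspend"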
Verification
After enabling event streaming, verify it is working:
Step 1: Analyze an Image
Analyze a new image to generate vulnerability and policy events:
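For example, using AnchoreCTL (the image reference here is only an illustration):
# Add an image and wait for analysis to complete
anchorectl image add docker.io/library/nginx:latest --wait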
Step 2: Check Data Event Files
You should see one or more event files with the pattern events.json.*.
Examples below:
# Kubernetes
kubectl exec -it <reports-worker-pod> -- ls -la /var/log/anchore/events/
Defaulted container "enterprise-reportsworker" out of: enterprise-reportsworker, fluent-bit (init)
total 51680
drwxrwsrwx. 2 root anchore 104 Jan 10 18:17 .
drwxrwxr-x. 1 anchore root 113 Jan 10 15:15 ..
-rw-r--r--. 1 anchore anchore 35440931 Jan 10 18:41 events.json.20260110T181643Z
-rw-r--r--. 1 anchore anchore 8192 Jan 10 18:02 offsets.db
-rw-r--r--. 1 anchore anchore 32768 Jan 10 18:41 offsets.db-shm
-rw-r--r--. 1 anchore anchore 4120032 Jan 10 18:41 offsets.db-wal
# Docker
docker exec <reports-worker-container> ls -la /var/log/anchore/events/
total 5092
drwxr-xr-x 2 root root 4096 Jan 10 18:29 .
drwxrwxr-x 3 anchore root 4096 Jan 10 15:21 ..
-rw-r--r-- 1 root root 5163541 Jan 10 15:26 events.json.20260110T152612Z
-rw-r--r-- 1 root root 8192 Jan 10 17:53 offsets.db
-rw-r--r-- 1 root root 32768 Jan 10 18:29 offsets.db-shm
-rw-r--r-- 1 root root 0 Jan 10 18:29 offsets.db-wal
Troubleshooting
No Event Files Created
- Verify enabled: true is set in the configuration
- Check that the Reports Worker has write permissions to the directory
- Ensure the event_stream_health cycle timer is configured
- Check Reports Worker logs for errors
Events Not Being Processed
- Verify Fluent Bit is running and can read the event files
- Check that the position database (offsets.db) exists and is being updated
- Review Fluent Bit logs for connection or parsing errors
Data Stream is Suspended
If the data stream becomes suspended due to unprocessed files accumulating, consider:
- Increase max_file_size_mb to buffer more data and allow Fluent Bit to catch up (see the sketch below)
- Increase max_file_count to retain more files during high-volume periods
- Ensure Fluent Bit is keeping up with event production
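A sketch of the adjusted settings; the numbers are illustrative starting points, not recommendations:
    event_stream:
      enabled: true
      max_file_size_mb: 250
      max_file_count: 10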
Next Steps
- Configure Fluent Bit to forward events to your destination
- Set up Splunk integration for security analytics