Docker Containers

LogZilla documentation for Docker Containers

Docker Containers Used by LogZilla

LogZilla runs as a set of Docker containers, each handling a different facet of its operation. The following containers are used:

| Container Name | Purpose |
|----------------|---------|
| lz_aggregatesmodule-1 | provides aggregates for events |
| lz_celerybeat | advances the internal task queue |
| lz_celeryworker | controls the execution of LogZilla modules |
| lz_dictionarymodule | handles user tags |
| lz_etcd | configuration data for use by all containers |
| lz_feeder | sends batch data from file to LogZilla |
| lz_forwardermodule | forwards events (e.g., after deduplication) |
| lz_front | LogZilla web UI |
| lz_gunicorn | hosting of the API |
| lz_influxdb | processed log/event data storage |
| lz_logcollector | collects and combines logs from the various LogZilla containers |
| lz_mailer | mail send service |
| lz_parsermodule | parses log events against rules |
| lz_postgres | permanent data storage (dashboards, triggers, rules, etc.) |
| lz_queryeventsmodule-1 | handles the query lifecycle |
| lz_queryupdatemodule | updates Redis with query results |
| lz_redis | in-memory storage of temporary data such as query results |
| lz_sec | simple event correlator |
| lz_storagemodule-1 | read/write activities on event data |
| lz_syslog | handling of incoming syslog events |
| lz_telegraf | maintains metrics of LogZilla performance |
| lz_tornado | API WebSocket support |
| lz_triggerexec-1234567890 | example of a dynamic container used to run custom scripts |
| lz_triggersactionmodule | handles trigger actions |
| lz_ai | AI services for LogZilla (available when the AI feature is enabled; depends on lz_qdrant) |
| lz_qdrant | vector database used by AI features (runs only when AI is enabled; persistent volume qdrant) |
| lz_watcher | monitors and maintains the LogZilla Docker containers |

Note: The AI containers appear only when the AI feature is enabled and configured, either in the web interface under Settings → System → AI or via the CLI with the logzilla settings update command (setting AI_ENABLED to true along with API_KEY and MODEL_NAME). When enabled, the web interface proxies AI requests under the /ai path.

Overview and Architecture Placement

LogZilla uses a set of Docker containers that cooperate to receive, normalize, store, query, and present log data. These containers are grouped by function so the platform can scale pieces independently and restart modules without impacting the entire system.

At a high level:

  • Ingestion enters through lz_syslog and optional HTTP receivers.
  • Processing and parsing occur in lz_parsermodule, with supporting modules such as lz_dictionarymodule and lz_forwardermodule for enrichment and forwarding.
  • Durable application state is stored in lz_postgres; time-series metrics in lz_influxdb; ephemeral caches and queues in lz_redis.
  • Queries are executed by the query and storage modules and returned through the API (lz_gunicorn) and UI proxy (lz_front).
  • Background workers and the watcher manage orchestration and periodic tasks.

Container Groups and Roles

Frontend and API

  • lz_front
    • Reverse proxy and web UI delivery layer.
    • Terminates HTTP/HTTPS and proxies API and WebSocket traffic to internal services.
  • lz_gunicorn
    • REST API service used by the UI and CLI.
    • Applies access control, executes administrative endpoints, and brokers queries.
  • lz_tornado
    • WebSocket service for live updates to the UI and CLI features that stream results.

Workers and Orchestration

  • lz_celeryworker
    • Executes background jobs that handle indexing, maintenance, and other asynchronous work.
  • lz_celerybeat
    • Schedules periodic tasks for the platform.
  • lz_watcher
    • Supervises containers, ensures expected processes are present, and re-creates them when changes occur.

Ingestion and Parsing

  • lz_syslog
    • Receives syslog events over UDP/TCP (BSD, RFC 5424) and supports TLS as configured. Also receives raw text and JSON on dedicated ports.
    • Port defaults are documented in Network Communications. Settings live under syslogng in Server Settings.
  • lz_logcollector
    • Collects and consolidates internal container logs for diagnostics.
  • lz_parsermodule
    • Applies rules to normalize and enrich events.
    • Works together with user-provided rules and installed applications.
  • lz_dictionarymodule
    • Maintains user tags and dictionary-style enrichments.
  • lz_forwardermodule
    • Forwards events to external systems according to configured rules.

Query and Storage Pipeline

  • lz_storagemodule-1
    • Manages read/write of event data and indexing operations.
  • lz_aggregatesmodule-1
    • Maintains aggregated datasets to accelerate common queries.
  • lz_queryeventsmodule-1
    • Executes search and analytic queries over the event store.
  • lz_queryupdatemodule
    • Publishes query results to Redis for consumption by UI/API.

Triggers and Actions

  • lz_triggersactionmodule
    • Evaluates triggers and dispatches actions.
  • lz_triggerexec-<id> (dynamic)
    • Ephemeral container created to run custom scripts for specific trigger executions; disappears when the script finishes.
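
A trigger-executed script is an ordinary executable; the sketch below is a hypothetical placeholder only (how LogZilla hands event details to the script is configured per trigger and is not shown here):

```shell
# Hypothetical trigger handler: report a summary of the event that fired.
# The argument-passing convention is illustrative, not LogZilla's contract.
run_trigger() {
    printf 'trigger fired: %s\n' "${1:-no summary provided}"
}

run_trigger "disk full on host01"
```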

Datastores and Metrics

  • lz_postgres
    • Durable application state: dashboards, triggers, users, roles, settings.
  • lz_influxdb
    • Time-series metrics related to system performance and internal telemetry.
  • lz_redis
    • Ephemeral cache and queues (e.g., query results and task coordination).
  • lz_telegraf
    • Collects internal metrics from the platform.
  • lz_mailer
    • Outbound SMTP service to send notifications and test emails.

AI (Conditional)

  • lz_ai
    • AI application backend that powers Copilot features when enabled.
    • Available only when AI is enabled and configured.
  • lz_qdrant
    • Vector database backing AI features. Persistent volume name: qdrant.
    • Runs only when AI is enabled.

Enablement: set AI_ENABLED=true and configure API_KEY and MODEL_NAME in the ai settings group. When enabled, lz_front proxies the AI backend under /ai.
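
As a sketch, the three settings in the ai group look like this (placeholder values shown; set them through the web interface or the logzilla settings update command, not as literal shell variables):

```
AI_ENABLED=true
API_KEY=your-provider-api-key
MODEL_NAME=your-model-identifier
```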

Lifecycle and Control

Container lifecycle is managed through the LogZilla CLI (see System Commands for full details). For Docker-specific troubleshooting, check container status:

```bash
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Image}}'
```
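
Building on that, a small sketch for flagging containers that are not in an Up state; sample output stands in for the live docker ps call so the logic is self-contained:

```shell
# Flag LogZilla containers whose status is not "Up".
# sample_status stands in for: docker ps --format '{{.Names}}\t{{.Status}}'
sample_status=$'lz_front\tUp 3 days\nlz_syslog\tUp 3 days\nlz_redis\tRestarting (1) 5 seconds ago'

printf '%s\n' "$sample_status" | awk -F'\t' '$2 !~ /^Up/ { print $1 " -> " $2 }'
```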

Logs and Troubleshooting

Primary platform log:

```bash
sudo tail -f /var/log/logzilla/logzilla.log
```

Per-container logs (examples):

```bash
docker logs -n 100 lz_front
docker logs -n 100 lz_syslog
docker logs -n 100 lz_gunicorn
```
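
When scanning those logs, filtering for error lines narrows things quickly; the sketch below runs against sample lines in place of live docker logs output:

```shell
# Surface error lines from container log output.
# sample_log stands in for: docker logs -n 100 lz_gunicorn 2>&1
sample_log=$'INFO starting worker\nERROR database connection refused\nINFO worker ready'

printf '%s\n' "$sample_log" | grep -i 'error'
```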

Quick checks:

  • Frontend is reachable on configured HTTP/HTTPS ports.
  • Syslog receivers are listening on the configured ports (see Network Communications for defaults and options).
  • Postgres and Redis are running and healthy.

If a module configuration has been updated and does not appear to take effect, use a targeted restart for that component.

Data Persistence and Volumes

The platform persists data across container restarts via Docker volumes.

  • Postgres stores durable application data.
  • Archive data is managed by the storage pipeline (see Data Archiving and Retention for lifecycle and relocation guidance).
  • InfluxDB stores time-series metrics.
  • Qdrant stores AI vectors when AI is enabled (volume qdrant).
  • Redis provides ephemeral caching and queues and is not a durability layer.

Before upgrades or large configuration changes, snapshots and backups are recommended. See Offline Installs and Upgrades.

Networking Model

lz_front terminates HTTP/HTTPS and proxies to internal services by container name. Syslog receiver ports are configurable; defaults are documented in Network Communications. Name resolution for containers uses the Docker network and service names. When custom host mappings are needed, use /etc/logzilla/hosts.in (see Custom DNS).
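
For example, a hypothetical /etc/logzilla/hosts.in with two extra mappings (assuming standard hosts-file entries; see Custom DNS for the authoritative format):

```
# /etc/logzilla/hosts.in - extra host mappings exposed to the containers
192.0.2.10   siem.example.com
192.0.2.20   ldap.internal.example.com
```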

Security and Best Practices

  • Use HTTPS in production for the web UI and API. See Using HTTPS.
  • Restrict inbound syslog ports to authorized sources using firewalls and network policy.
  • Prefer the logzilla CLI for lifecycle and configuration changes.
  • Take snapshots/backups before upgrades or significant configuration changes.
  • Monitor disk utilization for Postgres, archives, and InfluxDB volumes.
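
For the last point, a minimal disk-utilization check (the 80% threshold and the /var/lib/docker path are assumptions; adjust for your Docker data root and monitored volumes):

```shell
# Warn when the filesystem holding Docker data crosses a usage threshold.
threshold=80
path=/var/lib/docker
[ -d "$path" ] || path=/   # fall back to / if the default data root is absent

pct=$(df -P "$path" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$pct" -ge "$threshold" ]; then
    echo "WARNING: ${path} is ${pct}% full"
fi
```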