Advanced Query Types and System Monitoring
LogZilla provides specialized query types for system monitoring, performance analysis, and advanced data aggregation. These queries enable deep insights into both log data patterns and LogZilla system health.
System Performance Queries
System queries provide real-time and historical data about LogZilla host performance. These queries are essential for monitoring system health and capacity planning.
System CPU Usage
Monitor CPU utilization across different categories and time periods.
Parameters (cpu-params.json):
```json
{
  "time_range": {
    "preset": "last_24_hours"
  },
  "cpu": "totals"
}
```
CPU Options:
- `"totals"` - Aggregate across all CPU cores
- `0`, `1`, `2`, etc. - Specific CPU core number
Execute Query:
```bash
logzilla query --type System_CPU --params cpu-params.json --authtoken $TOKEN
```
Result Categories:
- `user` - CPU used by user applications
- `system` - CPU used by the operating system
- `idle` - CPU not doing work
- `wait` - CPU waiting for disk I/O
- `interrupt` - CPU handling hardware interrupts
- `softirq` - CPU servicing soft interrupts
- `nice` - CPU for process priority management
- `steal` - CPU allocated by hypervisor (virtualized systems)
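To sample individual cores rather than the aggregate, the `cpu` parameter can be swapped per query. The loop below is a minimal sketch, assuming core numbers are passed in the `cpu` field as listed in the options above and that `jq` is available to rewrite the params file; the generated filenames are illustrative.

```bash
# Minimal sketch: query the first four CPU cores one at a time.
# Assumes the "cpu" field accepts a core number (per the options above) and
# that jq is installed; the cpu-core-N.json filenames are illustrative.
for core in 0 1 2 3; do
  jq --argjson core "$core" '.cpu = $core' cpu-params.json > "cpu-core-${core}.json"
  logzilla query --type System_CPU --params "cpu-core-${core}.json" --authtoken $TOKEN
done
```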
System Memory Usage
Track memory utilization and identify memory pressure.
Parameters (memory-params.json):
```json
{
  "time_range": {
    "preset": "last_12_hours"
  }
}
```
Execute Query:
```bash
logzilla query --type System_Memory --params memory-params.json --authtoken $TOKEN
```
Memory Categories:
- `used` - Memory used by processes
- `free` - Available memory
- `buffered` - Memory used for I/O buffers
- `cached` - Memory used for disk cache
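The memory categories come back as named aggregates (see the result format later on this page), so the most recent reading for each can be pulled out with standard JSON tooling. A minimal sketch, assuming the CLI writes the JSON result to stdout and `jq` is installed:

```bash
# Minimal sketch: print the latest value for each returned memory metric.
# Assumes .results.totals maps metric names to aggregate objects with a
# "last" field, as described under "System Query Results" below.
logzilla query --type System_Memory --params memory-params.json --authtoken $TOKEN \
  | jq -r '.results.totals | to_entries[] | "\(.key): \(.value.last)"'
```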
System Disk Usage
Monitor disk space utilization across filesystems.
Parameters (disk-params.json):
```json
{
  "time_range": {
    "preset": "last_7_days"
  },
  "fs": "root"
}
```
Filesystem Options:
- `"root"` - Root filesystem (always available)
- System-specific mount points (varies by configuration)
Execute Query:
```bash
logzilla query --type System_DF --params disk-params.json --authtoken $TOKEN
```
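A common use of this query is a simple capacity check. The sketch below is illustrative only: it assumes the result exposes a usage metric under `.results.totals` with a `last` field (see the aggregate format later on this page) and that the value is a percentage; adjust the threshold and field handling if your deployment reports bytes instead.

```bash
# Minimal sketch: warn when the most recent disk-usage reading exceeds 90%.
# Assumes a single usage metric under .results.totals with a "last" field and
# a percentage value; both are assumptions, not documented guarantees.
usage=$(logzilla query --type System_DF --params disk-params.json --authtoken $TOKEN \
  | jq -r '[.results.totals[].last] | max')
awk -v u="$usage" 'BEGIN { exit !(u > 90) }' && echo "WARNING: disk usage at ${usage}%"
```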
System I/O Operations
Track disk I/O performance and identify bottlenecks.
Parameters (iops-params.json):
```json
{
  "time_range": {
    "preset": "last_6_hours"
  }
}
```
Execute Query:
```bash
logzilla query --type System_IOPSQuery --params iops-params.json --authtoken $TOKEN
```
I/O Metrics:
- `reads` - Read operations per second
- `writes` - Write operations per second
System Network Usage
Monitor network interface utilization and performance.
Parameters (network-params.json):
```json
{
  "time_range": {
    "preset": "last_24_hours"
  },
  "interface": "eth0"
}
```
Execute Query:
```bash
logzilla query --type System_Network --params network-params.json --authtoken $TOKEN
```
Network Metrics:
- `if_packets.tx` - Packets transmitted
- `if_packets.rx` - Packets received
- `if_octets.tx` - Bytes transmitted
- `if_octets.rx` - Bytes received
- `if_errors.tx` - Transmission errors
- `if_errors.rx` - Reception errors
System Network Errors
Focus specifically on network error conditions.
Parameters (net-errors-params.json):
```json
{
  "time_range": {
    "preset": "last_24_hours"
  },
  "interface": "eth0"
}
```
Execute Query:
```bash
logzilla query --type System_NetworkErrors --params net-errors-params.json --authtoken $TOKEN
```
Error Metrics:
- `drop_in` - Incoming packets dropped
- `drop_out` - Outgoing packets dropped
- `err_in` - Incoming packet errors
- `err_out` - Outgoing packet errors
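These counters should normally sit at zero, which makes them easy to alert on. A minimal sketch, assuming the error metrics listed above appear as keys under `.results.totals` in the aggregate format described later on this page, with `jq` installed:

```bash
# Minimal sketch: report any error counter whose most recent value is non-zero.
# Assumes the metric names (drop_in, drop_out, err_in, err_out) appear as keys
# under .results.totals with a "last" aggregate field.
logzilla query --type System_NetworkErrors --params net-errors-params.json --authtoken $TOKEN \
  | jq -r '.results.totals | to_entries[] | select(.value.last > 0)
           | "ALERT: \(.key) = \(.value.last) on eth0"'
```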
LogZilla Internal Queries
Storage Statistics
Monitor LogZilla's internal storage performance and utilization.
Parameters (storage-params.json):
```json
{
  "time_range": {
    "preset": "last_24_hours"
  }
}
```
Execute Query:
```bash
logzilla query --type StorageStats --params storage-params.json --authtoken $TOKEN
```
Storage Metrics:
- `new` - New events processed (not duplicates)
- `duplicates` - Duplicate events identified
- `total` - Total events processed
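One useful derived figure is the deduplication rate over the queried window. A minimal sketch, assuming `duplicates` and `total` show up as metric keys under `.results.totals` with `sum` aggregates (see the result format later on this page):

```bash
# Minimal sketch: percentage of processed events that were duplicates.
# Assumes .results.totals.duplicates.sum and .results.totals.total.sum exist.
logzilla query --type StorageStats --params storage-params.json --authtoken $TOKEN \
  | jq -r '"Duplicate rate: \(.results.totals.duplicates.sum / .results.totals.total.sum * 100 | round)%"'
```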
Processing Statistics
Track LogZilla's event processing performance.
Note: Requires `INTERNAL_COUNTERS_MAX_LEVEL` set to `DEBUG`.
Enable Processing Stats:
```bash
logzilla settings INTERNAL_COUNTERS_MAX_LEVEL=DEBUG
```
Parameters (processing-params.json):
```json
{
  "time_range": {
    "preset": "last_6_hours"
  }
}
```
Execute Query:
```bash
logzilla query --type ProcessingStats --params processing-params.json --authtoken $TOKEN
```
Processing Metrics:
- `new` - New events processed
- `duplicates` - Duplicate events found
- `oot` - Out-of-time events (outside TIME_TOLERANCE)
Advanced Data Queries
LastN Query
Retrieve the most recent values for a specific field, useful for finding latest activity or recent changes.
Parameters (lastn-params.json):
```json
{
  "field": "host",
  "limit": 20,
  "time_range": {
    "preset": "last_7_days"
  },
  "filter": [
    {
      "field": "program",
      "op": "eq",
      "value": ["kernel"]
    }
  ]
}
```
Execute Query:
```bash
logzilla query --type LastN --params lastn-params.json --authtoken $TOKEN
```
Result Fields:
- `name` - Field value
- `count` - Occurrence count in time range
- `last_seen` - Timestamp of most recent occurrence
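The `last_seen` value is an epoch timestamp, so it can be converted to a readable date when reviewing the output. A minimal sketch that requires `jq` and deliberately avoids assuming the exact nesting of the result, only that each entry carries the three fields listed above:

```bash
# Minimal sketch: print name, count, and a readable last_seen for each entry.
# Walks the whole result rather than assuming a specific nesting level.
logzilla query --type LastN --params lastn-params.json --authtoken $TOKEN \
  | jq -r '.. | objects | select(has("last_seen"))
           | "\(.name)\t\(.count)\t\(.last_seen | todate)"'
```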
Advanced TopN Features
TopN queries support advanced aggregation and subfield analysis.
Parameters with Subfields (topn-advanced.json):
```json
{
  "field": "host",
  "limit": 10,
  "time_range": {
    "preset": "last_24_hours"
  },
  "with_subperiods": true,
  "subfields": ["program", "severity"],
  "subfields_limit": 5,
  "show_other": true
}
```
Advanced Options:
- `with_subperiods` - Include data for each time sub-period
- `subfields` - Show breakdown by additional fields
- `subfields_limit` - Limit subfield results
- `show_other` - Include "other" category for remaining values
- `top_periods` - Show top sub-periods by activity
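This subsection does not show the corresponding invocation. Assuming TopN queries are executed the same way as the other types on this page, with `TopN` as the type name (an assumption mirroring `LastN` above), the command would look like:

```bash
# Assumed invocation; verify the exact --type name for TopN in your deployment
logzilla query --type TopN --params topn-advanced.json --authtoken $TOKEN
```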
Administrative Queries
Notifications Query
Retrieve notification group information and associated events.
Parameters (notifications-params.json):
```json
{
  "sort": "Newest first",
  "time_range": {
    "preset": "last_7_days"
  },
  "time_range_field": "created_at",
  "is_private": false,
  "read": false,
  "with_events": true
}
```
Sort Options:
- `"Oldest first"`
- `"Newest first"`
- `"Oldest unread first"`
- `"Newest unread first"`
Time Range Fields:
- `"created_at"` - When the notification was created
- `"updated_at"` - When the notification was last updated
- `"unread_since"` - When the notification became unread
- `"read_at"` - When the notification was read
Execute Query:
```bash
logzilla query --type Notifications --params notifications-params.json --authtoken $TOKEN
```
Tasks Query
Retrieve task management information for workflow tracking.
Parameters (tasks-params.json):
```json
{
  "target": "all",
  "is_overdue": false,
  "is_open": true,
  "assigned_to": [],
  "sort": ["-created_at"]
}
```
Target Options:
- `"assigned_to_me"` - Tasks assigned to the current user
- `"all"` - All tasks
Execute Query:
```bash
logzilla query --type Tasks --params tasks-params.json --authtoken $TOKEN
```
Query Result Formats
Understanding Result Structure
All queries return results in a consistent structure:
```json
{
  "query_id": "unique-query-identifier",
  "results": {
    "totals": {
      "ts_from": 1704067200,
      "ts_to": 1704153600,
      "count": 12345
    },
    "details": [
      {
        "ts_from": 1704067200,
        "ts_to": 1704070800,
        "count": 1234
      }
    ]
  }
}
```
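Because every query follows this shape, generic tooling can be reused across query types. A minimal sketch, assuming the CLI writes the JSON result to stdout and `jq` is installed; the saved filename is illustrative:

```bash
# Minimal sketch: save a result, then inspect it generically.
logzilla query --type System_CPU --params cpu-params.json --authtoken $TOKEN > result.json
jq -r '.query_id' result.json                 # the query identifier
jq -r '.results.totals | keys[]' result.json  # which total fields/metrics came back
jq -r '.results.details | length' result.json # number of time buckets returned
```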
System Query Results
System queries include aggregated statistics:
```json
{
  "totals": {
    "usage_cpu": {
      "sum": 1234.56,
      "count": 240,
      "min": 0.1,
      "max": 98.7,
      "avg": 5.14,
      "last": 12.3,
      "last_ts": 1704153600
    }
  }
}
```
Aggregate Fields:
- `sum` - Total of all values
- `count` - Number of data points
- `min` - Minimum value
- `max` - Maximum value
- `avg` - Average value (sum/count)
- `last` - Most recent value
- `last_ts` - Timestamp of the most recent value
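For example, in the sample above the average is derived as avg = sum / count = 1234.56 / 240 ≈ 5.14, which matches the reported `avg` field.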
Performance Considerations
Query Optimization
Time Range Optimization:
- Use specific time ranges rather than very broad ranges
- Consider system performance impact of long historical queries
- Use appropriate step sizes for time-series data
Filter Optimization:
- Apply filters to reduce data set size
- Use indexed fields (host, program, severity) for better performance
- Combine multiple filters efficiently
Archive Considerations:
- Queries with `with_archive: true` require additional processing time
- Archive queries may have different performance characteristics
- Consider splitting large archive queries into smaller time ranges
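To illustrate the last point, one approach is to issue a separate query per day and merge the outputs afterwards. This is a hypothetical sketch only: it assumes `time_range` also accepts explicit `ts_from`/`ts_to` epoch values and that the chosen query type supports `with_archive`, neither of which is documented on this page, so check the query parameters reference before relying on it.

```bash
# Hypothetical sketch: split a 30-day archive query into one query per day.
# QUERY_TYPE, base-params.json, and the time_range keys are assumptions.
QUERY_TYPE=LastN
now=$(date +%s)
for day in $(seq 0 29); do
  to=$(( now - day * 86400 ))
  from=$(( to - 86400 ))
  jq --argjson from "$from" --argjson to "$to" \
     '.time_range = {"ts_from": $from, "ts_to": $to} | .with_archive = true' \
     base-params.json > "archive-day-${day}.json"
  logzilla query --type "$QUERY_TYPE" --params "archive-day-${day}.json" --authtoken $TOKEN
done
```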
Resource Management
Memory Usage:
- Large result sets consume significant memory
- Use appropriate limits to control memory usage
- Consider pagination for very large datasets
CPU Impact:
- Complex aggregations require CPU resources
- System queries add monitoring overhead
- Schedule resource-intensive queries during off-peak hours
Best Practices
System Monitoring
- Regular health checks using system queries
- Set up automated monitoring for key metrics
- Establish baselines for normal system behavior
- Alert on anomalies in system performance
Query Design
- Start with simple queries and add complexity gradually
- Test query performance before automation
- Use appropriate time ranges for your use case
- Document query purposes and parameters
Data Analysis
- Combine multiple query types for comprehensive analysis
- Use subfields for detailed breakdowns
- Archive historical results for trend analysis
- Validate results against known system behavior
Automation
- Schedule regular reports using system queries (see the cron sketch after this list)
- Implement alerting based on query results
- Create reusable parameter templates for common queries
- Monitor query execution times and optimize as needed
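As an example of scheduled reporting, the cron entry below runs a storage statistics query every morning and appends the JSON result to a dated file. It is a minimal sketch: the binary path, token file, and output directory are illustrative placeholders, and it assumes the `logzilla` CLI is available non-interactively on the host.

```bash
# Minimal sketch: crontab entry for a daily 06:00 report. Paths and token file
# are illustrative; % is escaped as \% because cron treats % specially.
0 6 * * * /usr/local/bin/logzilla query --type StorageStats --params /etc/logzilla/reports/storage-params.json --authtoken "$(cat /etc/logzilla/.report-token)" >> /var/log/logzilla-reports/storage-$(date +\%F).json 2>&1
```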
Advanced queries provide deep insights into both log data and system performance. Master these tools to build comprehensive monitoring and analysis workflows.