PostgreSQL Audit Log Collection
Configure PostgreSQL to emit comprehensive audit logs and forward them to Purl for slow query monitoring, error tracking, and compliance auditing.
Prerequisites
- ✓ PostgreSQL 14+ with superuser access
- ✓ Purl instance running and accessible
- ✓ Fluent Bit installed on the database server
Configure PostgreSQL Logging
Enable comprehensive logging in postgresql.conf to capture queries, errors, and connection events.
# Enable logging collector
logging_collector = on
log_directory = '/var/log/postgresql'
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = 1d
log_rotation_size = 100MB
# Log format — CSV for structured parsing
log_destination = 'csvlog'
# What to log
log_statement = 'all' # 'none', 'ddl', 'mod', 'all'
log_min_duration_statement = 100 # Log queries slower than 100ms
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0 # Log all temp file creation
# Useful context (log_line_prefix applies only to stderr-format logs;
# csvlog rows always carry these fields as fixed columns)
log_line_prefix = '%t [%p] %u@%d '
log_checkpoints = on
log_autovacuum_min_duration = 0 # Log all autovacuum activity

# Apply the changes. Most of these settings only need a reload,
# but enabling logging_collector requires a full restart:
sudo systemctl restart postgresql
# For subsequent reload-only changes: sudo systemctl reload postgresql
# Or, from psql: SELECT pg_reload_conf();
Forward Logs with Fluent Bit
Configure Fluent Bit to tail PostgreSQL log files and forward to Purl with proper source tagging.
[SERVICE]
Flush 5
Daemon Off
Log_Level info
[INPUT]
Name tail
Tag postgresql
Path /var/log/postgresql/postgresql-*.csv
Refresh_Interval 5
Read_from_Head On
[FILTER]
Name modify
Match postgresql
Add source postgresql
Add level info
# Optional: forward only errors and slow queries
# (remove this grep filter to forward all logs; note that slow-query
# lines are logged at LOG severity, so they must be matched explicitly)
[FILTER]
Name grep
Match postgresql
Regex log ERROR|FATAL|WARNING|PANIC|duration:
[OUTPUT]
Name http
Match *
Host your-purl-host
Port 3000
URI /api/v1/logs
Format json
Header X-API-Key your-api-key
Header Content-Type application/json
Set Up Pipeline Parsing
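Before building parsing rules, you can sanity-check the ingestion endpoint that the Fluent Bit output above targets. The sketch below uses only the Python standard library; host, port, URI, and header names are taken from the Fluent Bit config, but the payload shape (a JSON array of one event) is an assumption — check Purl's API documentation for the exact format your version expects.

```python
import json
import urllib.request

# Endpoint details mirror the Fluent Bit [OUTPUT] block above;
# substitute your own host and API key.
PURL_URL = "http://your-purl-host:3000/api/v1/logs"
API_KEY = "your-api-key"

# A minimal test event shaped like a Fluent Bit tail record:
# the raw log line lives under the "log" key, plus the tags
# added by the modify filter above.
event = {
    "log": "2024-01-15 10:00:00 UTC [1234] app@mydb LOG: duration: 152.3 ms",
    "source": "postgresql",
    "level": "info",
}

# Payload as a JSON array of events (an assumption about Purl's API).
payload = json.dumps([event]).encode("utf-8")
request = urllib.request.Request(
    PURL_URL,
    data=payload,
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)

# Uncomment once Purl is reachable from this machine:
# with urllib.request.urlopen(request) as resp:
#     print(resp.status)
print(payload.decode("utf-8"))
```

If the request succeeds, the test event should appear in Purl's live view tagged `source:postgresql` within a few seconds.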
Create a Purl pipeline to parse PostgreSQL CSV log format and extract structured fields like duration, query, and error code.
# PostgreSQL CSV log fields:
# log_time, user_name, database_name, process_id, connection_from,
# session_id, session_line_num, command_tag, session_start_time,
# virtual_transaction_id, transaction_id, error_severity,
# sql_state_code, message, detail, hint, internal_query,
# internal_query_pos, context, query, query_pos, location,
# application_name, backend_type, leader_pid, query_id
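Purl's pipeline rules perform this extraction server-side; for reference, the same field mapping can be sketched in a few lines of Python with the standard `csv` module. The sample line below is hand-written for illustration (not captured output), and the column list assumes the PostgreSQL 14 csvlog format.

```python
import csv
import io

# Column order of PostgreSQL 14 csvlog output.
FIELDS = [
    "log_time", "user_name", "database_name", "process_id", "connection_from",
    "session_id", "session_line_num", "command_tag", "session_start_time",
    "virtual_transaction_id", "transaction_id", "error_severity",
    "sql_state_code", "message", "detail", "hint", "internal_query",
    "internal_query_pos", "context", "query", "query_pos", "location",
    "application_name", "backend_type", "leader_pid", "query_id",
]

# Illustrative slow-query line (hand-written, not real captured output).
sample = (
    '2024-01-15 10:00:00.123 UTC,app,mydb,1234,"10.0.0.5:51234",'
    '65a4f3b0.4d2,7,SELECT,2024-01-15 09:58:00 UTC,3/42,0,LOG,00000,'
    '"duration: 152.301 ms  statement: SELECT * FROM orders",,,,,,,,,'
    "psql,client backend,,0"
)

# csv.reader handles the quoted fields (message may contain commas).
row = next(csv.reader(io.StringIO(sample)))
record = dict(zip(FIELDS, row))

# The mapping described above: error_severity → level, message → message, etc.
log_event = {
    "level": record["error_severity"].lower(),
    "message": record["message"],
    "user": record["user_name"],
    "database": record["database_name"],
}
print(log_event)
```

Using a real CSV parser matters here: a naive `split(",")` would break on the quoted `message` column whenever a logged statement contains commas.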
# Pipeline rule: CSV extract
# Source filter: source = "postgresql"
# Rule type: json_extract (for CSV parsed as JSON by Fluent Bit)
# Extract: error_severity → level, message → message,
# user_name → user, database_name → database,
# duration → duration_ms
Set Up Slow Query Alerts
Create alerts to catch slow queries, connection failures, and database errors before they impact users.
# Alert: Slow Queries (> 5 seconds)
# Query: source:postgresql AND "duration" AND duration:>5000
# Threshold: > 3 matches in 5 minutes
# Channel: Slack #db-alerts
# Alert: Database Errors
# Query: source:postgresql AND (FATAL OR PANIC OR ERROR)
# Threshold: > 1 match in 1 minute
# Channel: Telegram
# Alert: Connection Failures
# Query: source:postgresql AND "connection" AND "refused"
# Threshold: > 5 matches in 5 minutes
# Channel: Webhook (PagerDuty)
Compliance Note
For SOC 2 and HIPAA compliance, ensure log_statement = 'all' is set and Purl's Enterprise audit log feature is enabled. This provides a complete audit trail of all database operations with tamper-evident storage in ClickHouse.
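As a footnote on the alert thresholds defined earlier (e.g. "> 3 matches in 5 minutes"): Purl evaluates these for you, but the logic amounts to counting matches inside a sliding time window. A minimal sketch of that evaluation, purely illustrative (this is not Purl's actual implementation):

```python
from collections import deque


class SlidingWindowAlert:
    """Fires when more than `threshold` matches occur within `window_seconds`."""

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def record_match(self, now):
        """Record a matching log event; return True if the alert should fire."""
        self.timestamps.append(now)
        # Drop matches that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        return len(self.timestamps) > self.threshold


# Example: the "Slow Queries" alert — more than 3 matches in 5 minutes.
alert = SlidingWindowAlert(threshold=3, window_seconds=300)
fired = [alert.record_match(t) for t in [0, 60, 120, 180]]
print(fired)  # the fourth match within the window exceeds the threshold
```

This also clarifies why the grep filter in the Fluent Bit config must let slow-query lines through: an alert can only count events that actually reach Purl.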