## Why Migrate from ELK?
The ELK Stack (Elasticsearch, Logstash, Kibana) has been the de facto standard for log management for over a decade. But it comes with significant operational overhead:
- Resource hungry: Elasticsearch requires 3+ nodes for production, each needing 16GB+ RAM
- JVM tuning nightmares: Heap size, garbage collection pauses, circuit breakers
- Shard management: Index lifecycle policies, shard allocation, rebalancing
- Complex pipeline: Logstash → Elasticsearch → Kibana = 3 separate systems to maintain
Purl replaces all of this with a single binary + ClickHouse, using 10x fewer resources while delivering faster query performance.
## Architecture Mapping
Here's how ELK components map to Purl:
- Logstash → Purl Ingest API (`/api/v1/logs`) or OTLP endpoint
- Beats/Filebeat → Fluent Bit or OpenTelemetry Collector pointing to Purl
- Elasticsearch → ClickHouse (columnar storage, 10-20x better compression)
- Kibana → Purl Dashboard (built-in web UI)
- Elasticsearch Query DSL → Purl's ES-compatible `_search` endpoint
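To make the Logstash-to-Ingest-API mapping concrete, here is a minimal sketch of posting a single log record to the `/api/v1/logs` endpoint. It assumes the endpoint accepts JSON authenticated with an `X-API-Key` header (as in the shipper configs later in this guide); the record's field names are illustrative, not a fixed schema.

```python
import json
import urllib.request

def build_ingest_request(host: str, api_key: str, record: dict) -> urllib.request.Request:
    """Build (but do not send) an HTTP POST to Purl's ingest endpoint."""
    body = json.dumps(record).encode("utf-8")
    return urllib.request.Request(
        url=f"http://{host}:3000/api/v1/logs",
        data=body,
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )

# Example: one structured log record (field names are illustrative)
req = build_ingest_request("your-purl", "your-api-key",
                           {"level": "error", "service": "api", "message": "db timeout"})
# urllib.request.urlopen(req)  # uncomment to actually send
```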
## Step 1: Prepare Your Purl Instance
Install Purl alongside your existing ELK stack. Both can run simultaneously during migration.
```bash
curl -fsSL https://purlogs.com/install.sh | sudo bash -s -- -i
```
This installs Purl + ClickHouse via Docker Compose. Verify it's running:
```bash
curl http://localhost:3000/api/health
# {"status":"ok","clickhouse":"connected"}
```
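If you script the installation, you can gate the next steps on the health payload shown above. A minimal sketch that parses that exact response shape:

```python
import json

def is_healthy(body: str) -> bool:
    """Return True when the health payload reports Purl up and ClickHouse connected."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return False
    return data.get("status") == "ok" and data.get("clickhouse") == "connected"

print(is_healthy('{"status":"ok","clickhouse":"connected"}'))  # True
```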
## Step 2: Dual-Write Log Pipeline
Configure your log shippers to send to both ELK and Purl simultaneously. If you're using Filebeat, add a second output:
```yaml
# filebeat.yml - add Purl as HTTP output
output.http:
  hosts: ["http://your-purl:3000/api/v1/logs"]
  headers:
    X-API-Key: "your-api-key"
```
Alternatively, deploy Fluent Bit alongside your existing pipeline:
```ini
[OUTPUT]
    Name    http
    Match   *
    Host    your-purl-host
    Port    3000
    URI     /api/v1/logs
    Format  json
    Header  X-API-Key your-api-key
```
## Step 3: Migrate Saved Searches and Alerts
Map your Kibana saved searches to Purl:
- KQL queries work in Purl with the same syntax (`level:error AND service:api`)
- Kibana dashboards → Purl custom dashboards (drag-and-drop builder)
- Watcher alerts → Purl alerts with Telegram, Slack, and Webhook channels
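As a sanity check when porting queries, it helps to pin down exactly which records a KQL conjunction selects. The sketch below is the Python equivalent of `level:error AND service:api`; it covers only this simple exact-match-AND case, not the full KQL grammar (wildcards, ranges, nesting):

```python
def matches(record: dict) -> bool:
    """Python equivalent of the KQL query `level:error AND service:api`."""
    return record.get("level") == "error" and record.get("service") == "api"

print(matches({"level": "error", "service": "api", "msg": "timeout"}))  # True
print(matches({"level": "error", "service": "web", "msg": "timeout"}))  # False
```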
## Step 4: Verify Data Parity
Run the same queries on both systems and compare results:
```bash
# Elasticsearch
curl "http://es:9200/logs-*/_search?q=level:error&size=100"
```
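Since Purl exposes an ES-compatible `_search` endpoint (see the architecture mapping above), the same query can be issued against both systems and the result sets diffed. A minimal sketch of the comparison step, assuming each hit has been flattened to a dict; the default key fields (`@timestamp`, `message`) are an assumption and may need adjusting to your schema:

```python
def parity_report(es_hits, purl_hits, key=("@timestamp", "message")):
    """Diff two result sets by a composite key and report what each side is missing."""
    def keys(hits):
        return {tuple(h.get(k) for k in key) for h in hits}
    es_keys, purl_keys = keys(es_hits), keys(purl_hits)
    return {
        "es_total": len(es_keys),
        "purl_total": len(purl_keys),
        "missing_in_purl": es_keys - purl_keys,
        "missing_in_es": purl_keys - es_keys,
    }

es = [{"@timestamp": "t1", "message": "a"}, {"@timestamp": "t2", "message": "b"}]
purl = [{"@timestamp": "t1", "message": "a"}]
print(parity_report(es, purl)["missing_in_purl"])  # {('t2', 'b')}
```

An empty `missing_in_purl` and `missing_in_es` over your verification window is the signal you are looking for before cutting over.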
## Step 5: Cut Over
Once you've verified data parity for 24-48 hours:
1. Update all log shippers to point only to Purl
2. Update team bookmarks from Kibana to the Purl dashboard
3. Disable Logstash and Elasticsearch
4. Reclaim the freed resources (typically 16-64GB of RAM)
## Results You Can Expect
Teams typically report:
- 80% reduction in infrastructure costs
- 10x improvement in query speed
- Zero operational overhead — no more shard management
- 5-minute recovery: just `docker compose up`, versus hours of cluster recovery