Migrate from Splunk
Replace Splunk's expensive licensing model with LogWard's self-hosted solution. Get native Sigma rules support for security detection without vendor lock-in.
Why Migrate from Splunk?
Eliminate License Costs
Splunk charges per GB/day indexed. Enterprise customers often pay $50K-$500K+/year. LogWard is open-source with only infrastructure costs.
Sigma Rules (Industry Standard)
Replace Splunk's proprietary SPL with standard Sigma detection rules. Access 2000+ community rules from SigmaHQ.
Simpler Architecture
No more indexer clusters, search heads, or deployment servers. LogWard runs as a single Docker Compose stack.
No Data Limits
No daily indexing limits. Ingest as much data as your infrastructure can handle without worrying about license overages.
Feature Comparison
| Feature | Splunk | LogWard |
|---|---|---|
| Log Ingestion | HEC, Forwarders | HTTP API, SDKs, OTLP |
| Query Language | SPL (proprietary) | REST API + Full-text |
| Full-text Search | Yes | Yes |
| Real-time Streaming | Yes | SSE |
| Alerts | Yes | Yes |
| Detection Rules | Splunk ES (extra license) | Sigma (included) |
| MITRE ATT&CK | Splunk ES | Included |
| Incident Management | Splunk ES / SOAR | Included |
| OpenTelemetry | Partial | Native OTLP |
| Self-hosted | Yes (licensed) | Yes (free) |
| Pricing | $150-$1800/GB/day | Infrastructure only |
Step 1: Inventory Your Splunk Setup
Document your existing Splunk configuration:
What to Document
- Data inputs: Universal Forwarders, HEC endpoints, scripted inputs
- Indexes: List all indexes and their retention settings
- Saved searches: Export scheduled searches and alerts
- Dashboards: Document key dashboards and visualizations
- Props/transforms: Document field extractions and parsing rules
Export Splunk configuration using the REST API:

```bash
# Export saved searches (alerts)
curl -k -u admin:password \
  "https://splunk:8089/servicesNS/-/-/saved/searches?output_mode=json" \
  > saved_searches.json

# Export dashboards
curl -k -u admin:password \
  "https://splunk:8089/servicesNS/-/-/data/ui/views?output_mode=json" \
  > dashboards.json

# List all indexes
curl -k -u admin:password \
  "https://splunk:8089/services/data/indexes?output_mode=json"
```

Step 2: Deploy LogWard
See the Deployment Guide for full instructions. Quick start:
```bash
# Clone LogWard
git clone https://github.com/logward-dev/logward.git
cd logward/docker

# Configure
cp .env.example .env
# Edit .env with your settings

# Start
docker compose up -d

# Verify
curl http://localhost:8080/health
```

Create your organization and project via the UI, then generate an API key.
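Once the stack reports healthy, sending a single test log confirms the API key and ingest path end to end. A minimal sketch using only the standard library; the endpoint and header match the ingest examples later in this guide, and `lp_xxx` is a placeholder key:

```python
import json
import urllib.request


def build_ingest_request(base_url, api_key, message):
    """Build a POST request for LogWard's /api/v1/ingest endpoint,
    authenticated with the X-API-Key header."""
    body = json.dumps({"logs": [{
        "service": "smoke-test",
        "level": "info",
        "message": message,
    }]}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/v1/ingest",
        data=body,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


# To actually send it once LogWard is running:
# urllib.request.urlopen(build_ingest_request("http://localhost:8080", "lp_xxx", "hello"))
```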
Step 3: Replace Universal Forwarder
Replace Splunk Universal Forwarder with Fluent Bit to send logs to LogWard.
Before (Splunk Universal Forwarder):

```ini
# inputs.conf
[monitor:///var/log/app/*.log]
index = main
sourcetype = app_logs

# outputs.conf
[tcpout]
defaultGroup = splunk_indexers

[tcpout:splunk_indexers]
server = splunk-indexer:9997
```

After (Fluent Bit):

```ini
[SERVICE]
    Flush        1
    Log_Level    info

[INPUT]
    Name         tail
    Path         /var/log/app/*.log
    Tag          app.*

[OUTPUT]
    Name         http
    Match        *
    Host         logward.internal
    Port         8080
    URI          /api/v1/ingest
    Format       json
    Header       X-API-Key lp_xxx
```

HEC Migration
If you're using Splunk HTTP Event Collector, migrate to LogWard's HTTP API:
Before (Splunk HEC):

```bash
curl -X POST \
  "https://splunk:8088/services/collector" \
  -H "Authorization: Splunk HEC_TOKEN" \
  -d '{
    "event": "User logged in",
    "sourcetype": "app_logs",
    "index": "main"
  }'
```

After (LogWard):

```bash
curl -X POST \
  "http://logward:8080/api/v1/ingest" \
  -H "X-API-Key: lp_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "logs": [{
      "service": "app",
      "level": "info",
      "message": "User logged in"
    }]
  }'
```

Step 4: Query Migration (SPL to LogWard)
Splunk uses SPL (Search Processing Language). LogWard uses REST API parameters. Here's how to translate common SPL queries:
| SPL Query | LogWard API |
|---|---|
| `index=main sourcetype=app_logs` | `GET /api/v1/logs?service=app` |
| `index=main level=ERROR` | `GET /api/v1/logs?level=error` |
| `index=main "connection failed"` | `GET /api/v1/logs?q=connection%20failed` |
| `index=main earliest=-1h` | `GET /api/v1/logs?from=2025-01-15T11:00:00Z` |
| `index=main \| stats count by host` | `GET /api/v1/logs/aggregated?interval=1h` |
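For scripted migrations of many saved searches, the translations in the table can be mechanized. A sketch covering only the flat filter patterns shown above (index/sourcetype/level filters, quoted phrases, `earliest=-Nh`), not full SPL with pipes or subsearches:

```python
import shlex
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode


def spl_to_logward(spl, now=None):
    """Translate a simple SPL filter into a LogWard query string.
    Illustrative only: handles the patterns in the table above."""
    now = now or datetime.now(timezone.utc)
    params = {}
    for token in shlex.split(spl):  # shlex keeps quoted phrases intact
        if token.startswith("index="):
            continue  # an index maps to a LogWard project, not a parameter
        if token.startswith("sourcetype="):
            params["service"] = token.split("=", 1)[1]
        elif token.startswith("level="):
            params["level"] = token.split("=", 1)[1].lower()
        elif token.startswith("earliest=-") and token.endswith("h"):
            hours = int(token[len("earliest=-"):-1])
            start = now - timedelta(hours=hours)
            params["from"] = start.strftime("%Y-%m-%dT%H:%M:%SZ")
        else:
            params["q"] = token  # bare terms become full-text search
    return "/api/v1/logs?" + urlencode(params)


# spl_to_logward("index=main level=ERROR")  -> "/api/v1/logs?level=error"
```

Note that `urlencode` emits spaces as `+` and escapes colons; both are equivalent to the `%20`-style encoding in the table under standard URL decoding.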
Step 5: Alert Migration
Convert Splunk saved searches (alerts) to LogWard alert rules:
Before (Splunk savedsearches.conf):

```ini
[High Error Rate]
search = index=main level=ERROR \
| stats count \
| where count > 100
cron_schedule = */5 * * * *
alert_type = number of events
alert_threshold = 100
action.email.to = team@example.com
```

After (LogWard alert rule):

```json
{
  "name": "High Error Rate",
  "enabled": true,
  "level": ["error"],
  "threshold": 100,
  "timeWindow": 5,
  "emailRecipients": ["team@example.com"]
}
```

Step 6: Security Detection Migration
If you're using Splunk Enterprise Security, migrate to LogWard's Sigma-based detection:
Benefits of Sigma Rules
- Industry standard format (not vendor-locked)
- 2000+ community rules from SigmaHQ
- MITRE ATT&CK mapping included
- No additional licensing required
Example Sigma rule for detecting suspicious PowerShell:
```yaml
title: Suspicious PowerShell Command
status: stable
level: high
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    CommandLine|contains:
      - '-enc'
      - '-EncodedCommand'
      - 'IEX'
      - 'Invoke-Expression'
  condition: selection
tags:
  - attack.execution
  - attack.t1059.001
```

Import Sigma rules via the LogWard UI at /dashboard/security/sigma.
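The rule's detection logic fits in a few lines of code: per the Sigma specification, plain string values and the `contains` modifier match case-insensitively, and `condition: selection` fires when any listed substring is present. A sketch of the equivalent check:

```python
# Substrings from the rule's CommandLine|contains list above
SUSPICIOUS = ["-enc", "-EncodedCommand", "IEX", "Invoke-Expression"]


def matches_rule(command_line):
    """True if this command line would trigger the Sigma rule above.
    Sigma string matching is case-insensitive, so compare lowercased."""
    lowered = command_line.lower()
    return any(s.lower() in lowered for s in SUSPICIOUS)
```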
Concept Mapping
| Splunk Term | LogWard Equivalent | Notes |
|---|---|---|
| Index | Project | One Splunk index = One LogWard project |
| Sourcetype | Service | Use service field to differentiate log sources |
| Host | metadata.host | Store in metadata JSON field |
| Source | metadata.source | Store in metadata JSON field |
| Universal Forwarder | Fluent Bit / SDK | Use Fluent Bit or application SDK |
| HEC | POST /api/v1/ingest | HTTP API endpoint |
| Saved Search | Alert Rule | Threshold-based alerts |
| Enterprise Security | Sigma Rules + SIEM | Built-in, no extra license |
| props.conf / transforms.conf | N/A (auto JSON parsing) | Send structured JSON logs |
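For custom HEC clients being re-pointed at LogWard, the mapping table translates directly into code. A sketch, with the payload shape taken from the ingest example in Step 3; the `info` level is an assumed default, since HEC events carry no level field:

```python
def hec_event_to_logward(hec):
    """Map a Splunk HEC payload onto a LogWard log record using the
    concept mapping above: sourcetype -> service, host and source ->
    metadata. The index has no per-record equivalent; it corresponds
    to the project the API key belongs to."""
    record = {
        "service": hec.get("sourcetype", "unknown"),
        "level": "info",  # assumed default: HEC events carry no level field
        "message": hec["event"],
        "metadata": {},
    }
    if "host" in hec:
        record["metadata"]["host"] = hec["host"]
    if "source" in hec:
        record["metadata"]["source"] = hec["source"]
    return record


# Wrap records in the ingest envelope from Step 3
payload = {"logs": [hec_event_to_logward(
    {"event": "User logged in", "sourcetype": "app_logs", "host": "web-01"}
)]}
```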
Common Issues
Timestamps are wrong after migration: LogWard expects timestamps in a `time` field (ISO 8601 format), not Splunk's `_time`. Ensure your log shipper sets the correct timestamp field.
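When replaying exported events, the conversion is a one-liner; a sketch, assuming Splunk's `_time` is an epoch value in seconds (possibly fractional):

```python
from datetime import datetime, timezone


def splunk_time_to_iso(epoch):
    """Convert a Splunk _time epoch (seconds, possibly fractional)
    into the ISO 8601 UTC string expected in the time field."""
    dt = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return dt.isoformat().replace("+00:00", "Z")
```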