Migrate from Splunk

Difficulty: Medium | Estimated time: 6-12 hours

Replace Splunk's expensive licensing model with LogWard's self-hosted solution. Get native Sigma rules support for security detection without vendor lock-in.

Why Migrate from Splunk?

Eliminate License Costs

Splunk charges per GB/day indexed. Enterprise customers often pay $50K-$500K+/year. LogWard is open-source with only infrastructure costs.

Sigma Rules (Industry Standard)

Replace Splunk's proprietary SPL with standard Sigma detection rules. Access 2000+ community rules from SigmaHQ.

Simpler Architecture

No more indexer clusters, search heads, or deployment servers. LogWard runs as a single Docker Compose stack.

No Data Limits

No daily indexing limits. Ingest as much data as your infrastructure can handle without worrying about license overages.

Feature Comparison

Feature | Splunk | LogWard
Log Ingestion | HEC, Forwarders | HTTP API, SDKs, OTLP
Query Language | SPL (proprietary) | REST API + Full-text
Full-text Search | Yes | Yes
Real-time Streaming | Yes | SSE
Alerts | Yes | Yes
Detection Rules | Splunk ES (extra license) | Sigma (included)
MITRE ATT&CK | Splunk ES | Included
Incident Management | Splunk ES / SOAR | Included
OpenTelemetry | Partial | Native OTLP
Self-hosted | Yes (licensed) | Yes (free)
Pricing | $150-$1800/GB/day | Infrastructure only

Step 1: Inventory Your Splunk Setup

Document your existing Splunk configuration:

What to Document

  • Data inputs: Universal Forwarders, HEC endpoints, scripted inputs
  • Indexes: List all indexes and their retention settings
  • Saved searches: Export scheduled searches and alerts
  • Dashboards: Document key dashboards and visualizations
  • Props/transforms: Document field extractions and parsing rules

Export Splunk configuration using the REST API:

bash
# Export saved searches (alerts)
curl -k -u admin:password \
  "https://splunk:8089/servicesNS/-/-/saved/searches?output_mode=json" \
  > saved_searches.json

# Export dashboards
curl -k -u admin:password \
  "https://splunk:8089/servicesNS/-/-/data/ui/views?output_mode=json" \
  > dashboards.json

# List all indexes
curl -k -u admin:password \
  "https://splunk:8089/services/data/indexes?output_mode=json"

Step 2: Deploy LogWard

See the Deployment Guide for full instructions. Quick start:

bash
# Clone LogWard
git clone https://github.com/logward-dev/logward.git
cd logward/docker

# Configure
cp .env.example .env
# Edit .env with your settings

# Start
docker compose up -d

# Verify
curl http://localhost:8080/health

Create your organization and project via the UI, then generate an API key.
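
Before pointing any log shippers at LogWard, it helps to confirm the API key works end to end. A quick smoke test, using the same ingest endpoint and payload shape shown later in this guide (replace lp_xxx with your key):

bash
# Send a single test log and confirm the API accepts it
curl -X POST \
  "http://localhost:8080/api/v1/ingest" \
  -H "X-API-Key: lp_xxx" \
  -H "Content-Type: application/json" \
  -d '{"logs": [{"service": "migration-test", "level": "info", "message": "Hello from the Splunk migration"}]}'

The test entry should then be visible when you search the project in the UI.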

Step 3: Replace Universal Forwarder

Replace Splunk Universal Forwarder with Fluent Bit to send logs to LogWard.

Before (Splunk)
ini
# inputs.conf
[monitor:///var/log/app/*.log]
index = main
sourcetype = app_logs

# outputs.conf
[tcpout]
defaultGroup = splunk_indexers

[tcpout:splunk_indexers]
server = splunk-indexer:9997
After (Fluent Bit)
ini
[SERVICE]
    Flush         1
    Log_Level     info

[INPUT]
    Name          tail
    Path          /var/log/app/*.log
    Tag           app.*

[OUTPUT]
    Name          http
    Match         *
    Host          logward.internal
    Port          8080
    URI           /api/v1/ingest
    Format        json
    Header        X-API-Key lp_xxx
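
Before rolling the new shipper out fleet-wide, it's worth sanity-checking the configuration on one host. A sketch, assuming Fluent Bit is installed locally or run via the official container image (adjust file names and paths to match your setup):

bash
# Validate the configuration without starting the pipeline
fluent-bit -c /etc/fluent-bit/fluent-bit.conf --dry-run

# Or run it in a container, mounting the config and the log directory
docker run --rm \
  -v "$(pwd)/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf" \
  -v /var/log/app:/var/log/app:ro \
  fluent/fluent-bit:latest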

HEC Migration

If you're using the Splunk HTTP Event Collector (HEC), migrate to LogWard's HTTP API:

Before (Splunk HEC)
bash
curl -X POST \
  "https://splunk:8088/services/collector" \
  -H "Authorization: Splunk HEC_TOKEN" \
  -d '{
    "event": "User logged in",
    "sourcetype": "app_logs",
    "index": "main"
  }'
After (LogWard API)
bash
curl -X POST \
  "http://logward:8080/api/v1/ingest" \
  -H "X-API-Key: lp_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "logs": [{
      "service": "app",
      "level": "info",
      "message": "User logged in"
    }]
  }'
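
Like HEC, the ingest endpoint accepts multiple events per request. A sketch of a batched payload that also carries the Splunk host and source in metadata and an explicit ISO 8601 time field; those field names are taken from the concept mapping and common-issues notes further down, so double-check them against your LogWard version:

bash
# Batch several events in one request, mapping host/source onto metadata
curl -X POST \
  "http://logward:8080/api/v1/ingest" \
  -H "X-API-Key: lp_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "logs": [
      {
        "service": "app",
        "level": "info",
        "message": "User logged in",
        "time": "2025-01-15T12:00:00Z",
        "metadata": {"host": "web-01", "source": "/var/log/app/app.log"}
      },
      {
        "service": "app",
        "level": "error",
        "message": "Connection failed",
        "time": "2025-01-15T12:00:05Z",
        "metadata": {"host": "web-02", "source": "/var/log/app/app.log"}
      }
    ]
  }'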

Step 4: Query Migration (SPL to LogWard)

Splunk uses SPL (Search Processing Language). LogWard uses REST API parameters. Here's how to translate common SPL queries:

  • index=main sourcetype=app_logs → GET /api/v1/logs?service=app
  • index=main level=ERROR → GET /api/v1/logs?level=error
  • index=main "connection failed" → GET /api/v1/logs?q=connection%20failed
  • index=main earliest=-1h → GET /api/v1/logs?from=2025-01-15T11:00:00Z
  • index=main | stats count by host → GET /api/v1/logs/aggregated?interval=1h
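
Two differences to keep in mind when translating: from/to take absolute ISO 8601 timestamps rather than relative expressions like earliest=-1h, and free-text terms go URL-encoded into the q parameter. A sketch of "connection failed in the last hour", assuming GNU date for the timestamp arithmetic and that the same X-API-Key header authenticates read requests:

bash
# Equivalent of: index=main "connection failed" earliest=-1h
FROM=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
curl -G "http://logward:8080/api/v1/logs" \
  -H "X-API-Key: lp_xxx" \
  --data-urlencode "q=connection failed" \
  --data-urlencode "from=${FROM}"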

Step 5: Alert Migration

Convert Splunk saved searches (alerts) to LogWard alert rules:

Splunk Alert
ini
# savedsearches.conf
[High Error Rate]
search = index=main level=ERROR \
  | stats count \
  | where count > 100
cron_schedule = */5 * * * *
alert_type = number of events
alert_threshold = 100
action.email.to = team@example.com
LogWard Alert Rule
json
{
  "name": "High Error Rate",
  "enabled": true,
  "level": ["error"],
  "threshold": 100,
  "timeWindow": 5,
  "emailRecipients": [
    "team@example.com"
  ]
}
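
To work through your Splunk alerts systematically, summarize the saved_searches.json export from Step 1 and map each entry onto this JSON shape by hand. A jq sketch, assuming the REST export mirrors the savedsearches.conf attribute names (cron_schedule, alert_threshold, action.email.to):

bash
# Name, schedule, threshold, and email recipients for each saved search
jq -r '.entry[]
  | [.name,
     (.content.cron_schedule // "-"),
     (.content.alert_threshold // "-"),
     (.content["action.email.to"] // "-")]
  | @tsv' saved_searches.json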

Step 6: Security Detection Migration

If you're using Splunk Enterprise Security, migrate to LogWard's Sigma-based detection:

Benefits of Sigma Rules

  • Industry standard format (not vendor-locked)
  • 2000+ community rules from SigmaHQ
  • MITRE ATT&CK mapping included
  • No additional licensing required

Example Sigma rule for detecting suspicious PowerShell:

yaml
title: Suspicious PowerShell Command
status: stable
level: high
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        CommandLine|contains:
            - '-enc'
            - '-EncodedCommand'
            - 'IEX'
            - 'Invoke-Expression'
    condition: selection
tags:
    - attack.execution
    - attack.t1059.001

Import Sigma rules via the LogWard UI at /dashboard/security/sigma.
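
To browse the community ruleset locally before importing, clone the SigmaHQ repository; the detection rules live under rules/, organized by log source:

bash
# Fetch the community Sigma rules
git clone https://github.com/SigmaHQ/sigma.git

# Windows process-creation rules, the category used in the example above
ls sigma/rules/windows/process_creation/ | head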

Concept Mapping

Splunk Term | LogWard Equivalent | Notes
Index | Project | One Splunk index = one LogWard project
Sourcetype | Service | Use the service field to differentiate log sources
Host | metadata.host | Store in the metadata JSON field
Source | metadata.source | Store in the metadata JSON field
Universal Forwarder | Fluent Bit / SDK | Use Fluent Bit or an application SDK
HEC | POST /api/v1/ingest | HTTP API endpoint
Saved Search | Alert Rule | Threshold-based alerts
Enterprise Security | Sigma Rules + SIEM | Built-in, no extra license
props.conf / transforms.conf | N/A (auto JSON parsing) | Send structured JSON logs

Common Issues

Field extraction differences
Splunk auto-extracts fields with props.conf. LogWard expects JSON logs. Use Fluent Bit parsers to structure logs before sending (see the parser sketch below), or update your application to emit JSON.
SPL queries don't translate directly
Complex SPL with pipes, stats, and evals need to be rethought. Use LogWard's aggregation API for time-series stats. For complex transformations, consider processing in your application.
Missing _time field
LogWard uses time field (ISO 8601 format), not _time. Ensure your log shipper sets the correct timestamp field.
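
For the field-extraction and timestamp issues above, a Fluent Bit parser usually covers both: it turns JSON lines into structured fields and lifts the embedded timestamp at the same time. A minimal sketch, written as a shell heredoc so it can sit next to the Step 3 config; hook it up by adding Parsers_File parsers.conf under [SERVICE] and Parser app_json under the tail [INPUT]. The time key and format are assumptions about your application's output:

bash
# parsers.conf: parse JSON log lines and use their embedded timestamp
cat > parsers.conf <<'EOF'
[PARSER]
    Name          app_json
    Format        json
    Time_Key      time
    Time_Format   %Y-%m-%dT%H:%M:%S%z
EOF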

Next Steps