Migrate from Datadog

Difficulty: Medium · Estimated time: 4-8 hours

Migrate from Datadog's proprietary platform to LogWard and save up to 90% on log management costs while gaining full data ownership and built-in SIEM capabilities.

Why Migrate from Datadog?

Massive Cost Savings

Datadog charges $0.10-$1.70/GB for log ingestion. A 500 GB/day deployment can cost $15,000+/month. LogWard is self-hosted with zero per-GB fees.

Full Data Ownership

Your logs never leave your infrastructure. No data sent to third parties. Full GDPR compliance with EU data sovereignty.

Built-in SIEM

Sigma detection rules, threat detection, and incident management included. Datadog Cloud SIEM costs extra ($0.20/GB on top of log costs).

Unlimited Users

No per-seat licensing. Add your entire team without worrying about per-user costs or role-based pricing tiers.

Feature Comparison

| Feature | Datadog | LogWard |
| --- | --- | --- |
| Log Ingestion (HTTP API) | Yes | Yes |
| SDKs (Node.js, Python, etc.) | Yes | Yes |
| OpenTelemetry Support | Partial (logs only) | Native OTLP |
| Full-text Search | Yes | Yes |
| Real-time Streaming | Yes | SSE |
| Alert Rules | Yes | Yes |
| Email/Webhook Notifications | Yes | Yes |
| Trace Correlation | Yes | Yes |
| Sigma Rules (SIEM) | No | Built-in |
| Incident Management | Cloud SIEM ($0.20/GB) | Included |
| MITRE ATT&CK Mapping | Cloud SIEM | Included |
| Self-hosted Option | No | Yes |
| Pricing | $0.10-$1.70/GB + per-user | Infrastructure only |

Step 1: Inventory Your Datadog Setup

Before migrating, document your existing Datadog configuration:

What to Document

  • Log sources: List all services/hosts sending logs to Datadog
  • Log volume: Check your usage dashboard for average GB/day
  • Active monitors: Export all log-based monitors via API
  • Dashboards: Screenshot or export critical dashboards
  • Log pipelines: Document parsing rules and processors

Export your Datadog configuration using the API:

bash
# Export all log monitors
curl -X GET "https://api.datadoghq.com/api/v1/monitor" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" > monitors.json

# Export all dashboards
curl -X GET "https://api.datadoghq.com/api/v1/dashboard" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" > dashboards.json
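Once exported, it helps to sanity-check the dump before planning the migration. A minimal sketch in Python, assuming `monitors.json` holds the JSON array returned by the v1 monitors endpoint (whose objects carry `name` and `type` fields):

```python
import json
from collections import Counter

def summarize_monitors(path: str) -> Counter:
    """Count exported Datadog monitors by type (e.g. 'log alert')."""
    with open(path) as f:
        monitors = json.load(f)  # the v1 endpoint returns a JSON array
    return Counter(m.get("type", "unknown") for m in monitors)

# Example usage:
# for monitor_type, count in summarize_monitors("monitors.json").most_common():
#     print(f"{count:4d}  {monitor_type}")
```

The `log alert` count tells you how many monitors need converting in Step 4; other types (metric alerts, synthetics) are out of scope for a log migration.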

Step 2: Deploy LogWard

Follow the Deployment Guide to set up LogWard. Here's a quick start:

bash
# Clone and configure
git clone https://github.com/logward-dev/logward.git
cd logward/docker

# Copy and edit environment
cp .env.example .env
# Edit .env: Set PUBLIC_API_URL, database passwords, etc.

# Start LogWard
docker compose up -d

# Verify deployment
curl http://localhost:8080/health

Recommended Specs (for 500 GB/day)

  • CPU: 8 cores
  • RAM: 32 GB
  • Disk: 2 TB SSD
  • Network: 1 Gbps
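The disk figure can be turned into a rough retention estimate. A back-of-envelope sketch — the 8:1 compression ratio and 30% free-space headroom are assumptions, so measure against your own data:

```python
def retention_days(disk_tb: float, ingest_gb_per_day: float,
                   compression_ratio: float = 8.0, headroom: float = 0.7) -> float:
    """Rough retention estimate: usable disk / compressed daily volume.

    compression_ratio and headroom are assumptions -- benchmark your own logs.
    """
    usable_gb = disk_tb * 1000 * headroom          # keep ~30% free for merges/indexes
    daily_gb = ingest_gb_per_day / compression_ratio
    return usable_gb / daily_gb

# e.g. retention_days(2, 500) -> ~22 days on a 2 TB disk at 500 GB/day
```

If you need longer retention, scale the disk linearly or lower ingest volume with sampling before cutover.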

After deployment, create your organization and project via the UI at http://localhost:3000, then generate an API key from the project settings.

Step 3: SDK Migration

Replace the Datadog SDK with the LogWard SDK. The APIs are similar, making migration straightforward.

Node.js Migration

Before (Datadog)
typescript
import { datadogLogs } from '@datadog/browser-logs';

datadogLogs.init({
  clientToken: 'pub_xxx',
  site: 'datadoghq.com',
  service: 'my-app',
  env: 'production',
});

datadogLogs.logger.info('User logged in', {
  userId: 123,
  email: 'user@example.com'
});
After (LogWard)
typescript
import { LogWardClient } from '@logward-dev/sdk-node';

const client = new LogWardClient({
  apiUrl: 'http://logward.internal:8080',
  apiKey: 'lp_xxx',
  globalMetadata: { env: 'production' }
});

client.info('my-app', 'User logged in', {
  userId: 123,
  email: 'user@example.com'
});

Python Migration

Before (Datadog)
python
from datadog import initialize, statsd
from ddtrace import tracer

initialize(api_key='xxx', app_key='yyy')

@tracer.wrap()
def process_request():
    statsd.increment('requests')
    # Datadog auto-instruments logs
After (LogWard)
python
from logward_sdk import LogWardClient

client = LogWardClient(
    api_url='http://logward.internal:8080',
    api_key='lp_xxx',
    global_metadata={'env': 'production'}
)

def process_request():
    client.info('api', 'Processing request')
    # Your business logic

Method Mapping

| Datadog | LogWard |
| --- | --- |
| logger.debug(message, context) | client.debug(service, message, metadata) |
| logger.info(message, context) | client.info(service, message, metadata) |
| logger.warn(message, context) | client.warn(service, message, metadata) |
| logger.error(message, context) | client.error(service, message, metadata) |
| N/A | client.critical(service, message, metadata) |
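If you want to migrate incrementally, a thin adapter can keep Datadog-style call sites unchanged while routing to LogWard underneath. A sketch (the adapter class is hypothetical; the LogWard method signatures follow the examples above):

```python
class DatadogStyleLogger:
    """Adapter exposing Datadog's logger.info(message, context) call shape
    on top of a LogWard-style client, which takes service as first argument."""

    def __init__(self, client, service: str):
        self._client = client
        self._service = service

    def debug(self, message, context=None):
        self._client.debug(self._service, message, context or {})

    def info(self, message, context=None):
        self._client.info(self._service, message, context or {})

    def warn(self, message, context=None):
        self._client.warn(self._service, message, context or {})

    def error(self, message, context=None):
        self._client.error(self._service, message, context or {})

# logger = DatadogStyleLogger(LogWardClient(...), service='my-app')
# logger.info('User logged in', {'userId': 123})  # call site unchanged
```

This lets you swap the backend first and clean up call sites later, one module at a time.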

Step 4: Alert Migration

Convert your Datadog monitors to LogWard alert rules. Here's how the formats map:

Datadog Monitor
json
{
  "name": "High Error Rate",
  "type": "log alert",
  "query": "status:error service:api",
  "message": "Error rate exceeded",
  "options": {
    "thresholds": { "critical": 100 },
    "evaluation_delay": 60
  }
}
LogWard Alert Rule
json
{
  "name": "High Error Rate",
  "enabled": true,
  "service": "api",
  "level": ["error"],
  "threshold": 100,
  "timeWindow": 5,
  "emailRecipients": ["team@example.com"],
  "webhookUrl": "https://hooks.slack.com/..."
}

Create alert rules via the LogWard API:

bash
curl -X POST "http://logward.internal:8080/api/v1/alerts" \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "your-org-id",
    "projectId": "your-project-id",
    "name": "High Error Rate",
    "enabled": true,
    "service": "api",
    "level": ["error"],
    "threshold": 100,
    "timeWindow": 5,
    "emailRecipients": ["team@example.com"]
  }'
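For a large monitor export, the mapping shown above can be scripted. A sketch that translates simple Datadog log-alert monitors into LogWard rule payloads — the field names follow the examples above, and the query parser is deliberately naive, handling only flat `key:value` terms (anything richer needs manual review):

```python
def monitor_to_alert_rule(monitor: dict, time_window_minutes: int = 5) -> dict:
    """Translate a simple Datadog log-alert monitor into a LogWard alert rule.

    Only flat 'key:value' query terms (status:, service:) are handled;
    boolean operators, facets, and wildcards need manual conversion.
    """
    terms = dict(t.split(":", 1) for t in monitor["query"].split() if ":" in t)
    thresholds = monitor.get("options", {}).get("thresholds", {})
    return {
        "name": monitor["name"],
        "enabled": True,
        "service": terms.get("service"),
        "level": [terms["status"]] if "status" in terms else [],
        "threshold": thresholds.get("critical", 1),
        "timeWindow": time_window_minutes,
    }
```

Run it over the `monitors.json` export from Step 1, then POST each resulting payload to the alerts endpoint as shown above, filling in your organization and project IDs.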

Step 5: Parallel Ingestion (Validation)

Run both platforms in parallel for 24-48 hours to validate data consistency before cutover.

Dual Ingestion Example
typescript
import { datadogLogs } from '@datadog/browser-logs';
import { LogWardClient } from '@logward-dev/sdk-node';

// Initialize both
datadogLogs.init({ clientToken: 'xxx', site: 'datadoghq.com' });
const logward = new LogWardClient({
  apiUrl: 'http://logward.internal:8080',
  apiKey: 'lp_xxx'
});

type LogLevel = 'debug' | 'info' | 'warn' | 'error';

// Wrapper to send to both platforms
function log(level: LogLevel, service: string, message: string, meta?: object) {
  // Send to Datadog
  datadogLogs.logger[level](message, { service, ...meta });

  // Send to LogWard
  logward[level](service, message, meta);
}

// Usage
log('info', 'api', 'Request processed', { userId: 123 });

Validation Checklist

  • Compare log counts in both platforms (should match within 1%)
  • Verify search results return the same logs
  • Test alert triggers on both platforms
  • Confirm notification delivery (email, Slack, webhooks)
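The first checklist item is easy to automate. A minimal sketch of the 1% tolerance check, fed with counts you pull from each platform's search API:

```python
def counts_match(datadog_count: int, logward_count: int, tolerance: float = 0.01) -> bool:
    """True if the two platforms' log counts agree within `tolerance` (1% default)."""
    if datadog_count == 0:
        return logward_count == 0
    return abs(datadog_count - logward_count) / datadog_count <= tolerance

# counts_match(1_000_000, 995_500) -> True  (0.45% drift)
# counts_match(1_000_000, 980_000) -> False (2% drift)
```

Small, consistent drift usually comes from buffering delays at the window edges; a growing gap points at dropped batches or rate limiting.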

Step 6: Cutover & Cleanup

Once validated, complete the migration:

  1. Update production configs to use the LogWard SDK only (remove the Datadog SDK)
  2. Remove the Datadog Agent from all hosts (if you use it for infrastructure monitoring, consider alternatives)
  3. Update team runbooks and documentation to reference LogWard URLs
  4. Cancel your Datadog subscription after the retention period expires

Concept Mapping

| Datadog Term | LogWard Equivalent | Notes |
| --- | --- | --- |
| Organization | Organization | 1:1 mapping |
| Index | Project | Logs are scoped to projects |
| Service | Service | 1:1 mapping |
| Log Pipeline | N/A (automatic) | LogWard auto-parses JSON |
| Monitor | Alert Rule | Similar functionality |
| Dashboard | SIEM Dashboard | Security-focused dashboards |
| API Key | API Key (per project) | Prefix: lp_ |
| Cloud SIEM | Sigma Rules + Incidents | Included at no extra cost |

Common Issues

Logs not appearing in LogWard
  • Verify API key is valid and has write permissions
  • Check API URL is accessible from your application
  • Ensure Content-Type header is application/json
  • Check rate limits (default: 200 req/min per API key)
Timestamp mismatch
LogWard expects ISO 8601 timestamps in UTC. Datadog may use Unix epoch. Ensure your SDK is sending ISO format: 2025-01-15T12:00:00Z
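A small helper for the conversion, handling both second- and millisecond-resolution epochs (the 1e12 cutoff is a heuristic for detecting milliseconds):

```python
from datetime import datetime, timezone

def epoch_to_iso8601(epoch: float) -> str:
    """Convert a Unix epoch (seconds, or milliseconds if > 1e12) to ISO 8601 UTC."""
    if epoch > 1e12:              # heuristic: millisecond-resolution timestamp
        epoch /= 1000.0
    dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
    return dt.isoformat().replace("+00:00", "Z")

# epoch_to_iso8601(1736942400) -> '2025-01-15T12:00:00Z'
```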
Missing service name
In Datadog, service is often auto-detected. In LogWard, you must explicitly pass the service name as the first argument: client.info('my-service', 'message')

Next Steps