Migrate from Datadog
Migrate from Datadog's proprietary platform to LogWard and save up to 90% on log management costs while gaining full data ownership and built-in SIEM capabilities.
Why Migrate from Datadog?
Massive Cost Savings
Datadog charges $0.10-$1.70/GB for log ingestion. A 500 GB/day deployment can cost $15,000+/month. LogWard is self-hosted with zero per-GB fees.
Full Data Ownership
Your logs never leave your infrastructure. No data sent to third parties. Full GDPR compliance with EU data sovereignty.
Built-in SIEM
Sigma detection rules, threat detection, and incident management included. Datadog Cloud SIEM costs extra ($0.20/GB on top of log costs).
Unlimited Users
No per-seat licensing. Add your entire team without worrying about per-user costs or role-based pricing tiers.
Feature Comparison
| Feature | Datadog | LogWard |
|---|---|---|
| Log Ingestion (HTTP API) | Yes | Yes |
| SDKs (Node.js, Python, etc.) | Yes | Yes |
| OpenTelemetry Support | Partial (Logs only) | Native OTLP |
| Full-text Search | Yes | Yes |
| Real-time Streaming | Yes | SSE |
| Alert Rules | Yes | Yes |
| Email/Webhook Notifications | Yes | Yes |
| Trace Correlation | Yes | Yes |
| Sigma Rules (SIEM) | No | Built-in |
| Incident Management | Cloud SIEM ($0.20/GB) | Included |
| MITRE ATT&CK Mapping | Cloud SIEM | Included |
| Self-hosted Option | No | Yes |
| Pricing | $0.10-$1.70/GB + per-user | Infrastructure only |
Step 1: Inventory Your Datadog Setup
Before migrating, document your existing Datadog configuration:
What to Document
- Log sources: List all services/hosts sending logs to Datadog
- Log volume: Check your usage dashboard for average GB/day
- Active monitors: Export all log-based monitors via API
- Dashboards: Screenshot or export critical dashboards
- Log pipelines: Document parsing rules and processors
Export your Datadog configuration using the API:
# Export all log monitors
curl -X GET "https://api.datadoghq.com/api/v1/monitor" \
-H "DD-API-KEY: ${DD_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DD_APP_KEY}" > monitors.json
# Export all dashboards
curl -X GET "https://api.datadoghq.com/api/v1/dashboard" \
-H "DD-API-KEY: ${DD_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DD_APP_KEY}" > dashboards.jsonStep 2: Deploy LogWard
Step 2: Deploy LogWard
Follow the Deployment Guide to set up LogWard. Here's a quick start:
# Clone and configure
git clone https://github.com/logward-dev/logward.git
cd logward/docker
# Copy and edit environment
cp .env.example .env
# Edit .env: Set PUBLIC_API_URL, database passwords, etc.
# Start LogWard
docker compose up -d
# Verify deployment
curl http://localhost:8080/health
After deployment, create your organization and project via the UI at http://localhost:3000,
then generate an API key from the project settings.
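Before wiring up any application code, it is worth confirming that the new key can actually write logs. A minimal smoke test using the Node.js SDK from Step 3; the URL and lp_xxx key are placeholders for your own values:
// smoke-test.ts -- send a single test log to verify the deployment and API key
import { LogWardClient } from '@logward-dev/sdk-node';

const client = new LogWardClient({
  apiUrl: 'http://localhost:8080',  // your PUBLIC_API_URL
  apiKey: 'lp_xxx',                 // project API key from the settings page
});

// Depending on the SDK's buffering behavior, you may need to flush or close
// the client before the process exits -- check the SDK documentation.
client.info('migration-test', 'LogWard smoke test', { migratedFrom: 'datadog' });
console.log('Test log sent -- check the project log view at http://localhost:3000');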
Step 3: SDK Migration
Replace the Datadog SDK with the LogWard SDK. The APIs are similar, so migration is largely mechanical; a drop-in compatibility wrapper sketch follows the method mapping table below.
Node.js Migration
Before (Datadog):
import { datadogLogs } from '@datadog/browser-logs';
datadogLogs.init({
clientToken: 'pub_xxx',
site: 'datadoghq.com',
service: 'my-app',
env: 'production',
});
datadogLogs.logger.info('User logged in', {
userId: 123,
email: 'user@example.com'
});
After (LogWard):
import { LogWardClient } from '@logward-dev/sdk-node';
const client = new LogWardClient({
apiUrl: 'http://logward.internal:8080',
apiKey: 'lp_xxx',
globalMetadata: { env: 'production' }
});
client.info('my-app', 'User logged in', {
userId: 123,
email: 'user@example.com'
});
Python Migration
Before (Datadog):
from datadog import initialize, statsd
from ddtrace import tracer
initialize(api_key='xxx', app_key='yyy')
@tracer.wrap()
def process_request():
    statsd.increment('requests')
    # Datadog auto-instruments logs
After (LogWard):
from logward_sdk import LogWardClient
client = LogWardClient(
api_url='http://logward.internal:8080',
api_key='lp_xxx',
global_metadata={'env': 'production'}
)
def process_request():
    client.info('api', 'Processing request')
    # Your business logic
Method Mapping
| Datadog | LogWard |
|---|---|
| logger.debug(message, context) | client.debug(service, message, metadata) |
| logger.info(message, context) | client.info(service, message, metadata) |
| logger.warn(message, context) | client.warn(service, message, metadata) |
| logger.error(message, context) | client.error(service, message, metadata) |
| N/A | client.critical(service, message, metadata) |
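If rewriting every call site at once is too disruptive, a thin facade can keep Datadog's logger.level(message, context) shape while delegating to LogWard. A minimal sketch using only the SDK methods from the table above; the service name is fixed when the logger is created:
// logward-logger.ts -- Datadog-style logger facade over the LogWard client
import { LogWardClient } from '@logward-dev/sdk-node';

const client = new LogWardClient({
  apiUrl: 'http://logward.internal:8080',
  apiKey: 'lp_xxx',
  globalMetadata: { env: 'production' },
});

// Mirrors datadogLogs.logger.info(message, context) so call sites barely change
export function createLogger(service: string) {
  return {
    debug: (message: string, context?: object) => client.debug(service, message, context),
    info: (message: string, context?: object) => client.info(service, message, context),
    warn: (message: string, context?: object) => client.warn(service, message, context),
    error: (message: string, context?: object) => client.error(service, message, context),
  };
}

// Usage:
// const logger = createLogger('my-app');
// logger.info('User logged in', { userId: 123 });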
Step 4: Alert Migration
Convert your Datadog monitors to LogWard alert rules. Here's how the formats map:
Datadog monitor:
{
"name": "High Error Rate",
"type": "log alert",
"query": "status:error service:api",
"message": "Error rate exceeded",
"options": {
"thresholds": { "critical": 100 },
"evaluation_delay": 60
}
}
LogWard alert rule:
{
"name": "High Error Rate",
"enabled": true,
"service": "api",
"level": ["error"],
"threshold": 100,
"timeWindow": 5,
"emailRecipients": ["team@example.com"],
"webhookUrl": "https://hooks.slack.com/..."
}
Create alert rules via the LogWard API:
curl -X POST "http://logward.internal:8080/api/v1/alerts" \
-H "Authorization: Bearer YOUR_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"organizationId": "your-org-id",
"projectId": "your-project-id",
"name": "High Error Rate",
"enabled": true,
"service": "api",
"level": ["error"],
"threshold": 100,
"timeWindow": 5,
"emailRecipients": ["team@example.com"]
}'
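To avoid recreating monitors by hand, the exported monitors.json can be converted in bulk. A hedged sketch (Node 18+ for the global fetch): the endpoint and payload mirror the curl call above, but the service/level extraction from the Datadog query string is a heuristic you will need to adapt to your own facet names.
// migrate-alerts.ts -- convert Datadog log monitors into LogWard alert rules
import { readFileSync } from 'node:fs';

interface DatadogMonitor {
  name: string;
  type: string;
  query: string;
  options?: { thresholds?: { critical?: number } };
}

const monitors: DatadogMonitor[] = JSON.parse(readFileSync('monitors.json', 'utf8'));

for (const m of monitors.filter((x) => x.type === 'log alert')) {
  // Heuristic: pull service:<name> and status:<level> facets out of the Datadog query
  const service = /service:([\w-]+)/.exec(m.query)?.[1] ?? 'unknown';
  const level = /status:(\w+)/.exec(m.query)?.[1] ?? 'error';

  const res = await fetch('http://logward.internal:8080/api/v1/alerts', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_SESSION_TOKEN',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      organizationId: 'your-org-id',
      projectId: 'your-project-id',
      name: m.name,
      enabled: true,
      service,
      level: [level],
      threshold: m.options?.thresholds?.critical ?? 100,
      timeWindow: 5,  // minutes -- review per rule against the original evaluation window
      emailRecipients: ['team@example.com'],
    }),
  });
  console.log(`${m.name}: ${res.status}`);
}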
Step 5: Parallel Ingestion (Validation)
Run both platforms in parallel for 24-48 hours to validate data consistency before cutover.
import { datadogLogs } from '@datadog/browser-logs';
import { LogWardClient } from '@logward-dev/sdk-node';
// Initialize both
datadogLogs.init({ clientToken: 'xxx', site: 'datadoghq.com' });
const logward = new LogWardClient({
apiUrl: 'http://logward.internal:8080',
apiKey: 'lp_xxx'
});
// Wrapper to send to both
function log(level: 'debug' | 'info' | 'warn' | 'error', service: string, message: string, meta?: object) {
// Send to Datadog
datadogLogs.logger[level](message, { service, ...meta });
// Send to LogWard
logward[level](service, message, meta);
}
// Usage
log('info', 'api', 'Request processed', { userId: 123 });
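To make the count comparison in the checklist below concrete, the wrapper can keep a local tally of everything it sent; those totals are the ground truth to check both platforms against. A small sketch that builds on the log() wrapper above:
// Local tally of logs sent during the parallel run, per level
const sent: Record<string, number> = { debug: 0, info: 0, warn: 0, error: 0 };

function countedLog(level: 'debug' | 'info' | 'warn' | 'error', service: string, message: string, meta?: object) {
  sent[level] += 1;
  log(level, service, message, meta);  // dual-write wrapper defined above
}

// Print totals periodically; compare against each platform's counts for the same window
setInterval(() => console.log('sent so far:', sent), 60_000);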
Validation Checklist
- Compare log counts in both platforms (should match within 1%)
- Verify search results return the same logs
- Test alert triggers on both platforms
- Confirm notification delivery (email, Slack, webhooks)
Step 6: Cutover & Cleanup
Once validated, complete the migration:
1. Update production configs to use the LogWard SDK only (remove the Datadog SDK)
2. Remove the Datadog Agent from all hosts (if you use it for infrastructure monitoring, consider alternatives first)
3. Update team runbooks and documentation to reference LogWard URLs
4. Cancel the Datadog subscription after the retention period expires
Concept Mapping
| Datadog Term | LogWard Equivalent | Notes |
|---|---|---|
| Organization | Organization | 1:1 mapping |
| Index | Project | Logs are scoped to projects |
| Service | Service | 1:1 mapping |
| Log Pipeline | N/A (automatic) | LogWard auto-parses JSON |
| Monitor | Alert Rule | Similar functionality |
| Dashboard | SIEM Dashboard | Security-focused dashboards |
| API Key | API Key (per project) | Prefix: lp_ |
| Cloud SIEM | Sigma Rules + Incidents | Included at no extra cost |
Common Issues
- Verify the API key is valid and has write permissions
- Check that the API URL is reachable from your application
- Ensure the Content-Type header is application/json
- Check rate limits (default: 200 req/min per API key)
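When logs are not showing up, check reachability and the key separately. A short sketch (Node 18+ for the global fetch) that hits the documented /health endpoint and then sends one authenticated test log; adjust the URL and key to your deployment:
// diagnose.ts -- check API reachability, then try one authenticated log write
import { LogWardClient } from '@logward-dev/sdk-node';

const apiUrl = 'http://logward.internal:8080';

// 1. Is the API reachable from this host?
const health = await fetch(`${apiUrl}/health`);
console.log('health check:', health.ok ? 'ok' : `failed (${health.status})`);

// 2. Does the project API key accept writes? (watch the project log view)
const client = new LogWardClient({ apiUrl, apiKey: 'lp_xxx' });
client.info('diagnostics', 'connectivity test', { source: 'migration-troubleshooting' });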