Next.js + Vercel + Datadog
Full observability in one afternoon. Pick the path that matches your Vercel plan, drop in the snippets, deploy.
Overview
Three signals. One service map. Same instrumentation.ts regardless of which delivery path you choose — the path only changes which env vars you set and whether you configure a Vercel drain.
APM
Distributed traces — server spans, edge spans, error tracking
RUM
Browser sessions, performance vitals, session replay
Logs
Structured JSON server logs correlated to traces
Also covered: source maps (browser + server) and Error Tracking span attributes.
Signal flow diagram
Prerequisites
- Datadog account with APM, RUM, and Logs enabled
- Next.js 14+ app deployed to Vercel
- Node.js 18+
- Vercel Pro or Enterprise if using Path A or B (log drains require a paid plan)
Choose Your Path
All three paths use identical application code. They differ only in how traces and logs reach Datadog.
Datadog Integration
Vercel Pro / Enterprise
- ✓ One-click marketplace install
- ✓ Enable Traces (beta) in integration settings
- ✓ Log drain included
- ✓ VERCEL_OTEL_ENDPOINTS injected automatically
- ✗ Traces (beta) must be explicitly enabled
- ✗ Vercel egress fees
Manual Drain
Vercel Pro / Enterprise
- ✓ No env vars needed for trace routing
- ✓ Simple dd-api-key drain auth
- ✓ Works alongside other integrations
- ✗ Manual drain setup
- ✗ Vercel egress fees
Direct OTLP
Hobby / Pro / Enterprise
- ✓ Works on Hobby plan
- ✓ No Vercel egress fees
- ✓ Env-var-only config
- ✗ No auto log drain
- ✗ Log forwarding is manual
Path A — Datadog Integration (Marketplace)
Install the integration from the Vercel Marketplace and enable Traces (beta) in the integration settings; Vercel then injects VERCEL_OTEL_ENDPOINTS at runtime. Also enable the log drain. Application code: add instrumentation.ts — see the APM section.
How it works: Vercel runs an OTLP sidecar on localhost:4318 inside every serverless function. @vercel/otel sends spans there by default. Vercel's platform then forwards those spans to all configured trace drains — including native integrations (Datadog, Sentry, etc.) — without requiring users to set any new environment variables. The sidecar only handles /v1/traces. Logs are a separate concern: Vercel captures them natively at the platform level, and the Datadog integration lets you choose which log types to forward (runtime, build, firewall, etc.) from within the integration settings.
Path B — Manual Vercel Drain
Use when you want explicit control over drain configuration without installing a native integration. No OTEL_EXPORTER_OTLP_* env vars needed; Vercel routes sidecar spans to the drain automatically.
Create a trace drain in the Vercel dashboard: choose the Datadog OTLP Traces type, and set the URL destination to https://vercel.integrations.otlp.datadoghq.com/v1/traces (adjust the hostname for your Datadog site, e.g. vercel.integrations.otlp.us3.datadoghq.com for US3). Authenticate the drain with a custom header: dd-api-key: <your-key>. The vercel.integrations.otlp.* endpoint does not require dd-otlp-source.
Add a log drain as well so console.log() output reaches Datadog Log Management.
Application code: add instrumentation.ts — see the APM section. Add RUM — see the RUM section.
Path C — Direct OTLP
No drain required. Setting OTEL_EXPORTER_OTLP_ENDPOINT causes @vercel/otel to bypass the sidecar entirely and send spans directly to Datadog's OTLP intake. Works on any Vercel plan including Hobby.
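One way to configure Path C is through the Vercel CLI rather than the dashboard. A sketch, assuming the `vercel` CLI is installed and linked to the project (`vercel env add` prompts for each value on stdin, so the API key never lands in shell history):

```shell
# Sketch: set the Path C exporter variables for the production environment.
# Each command prompts interactively for the value.
vercel env add OTEL_EXPORTER_OTLP_ENDPOINT production
vercel env add OTEL_EXPORTER_OTLP_HEADERS production
vercel env add OTEL_EXPORTER_OTLP_PROTOCOL production
```

Repeat for `preview` if you want traces from preview deployments as well.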
Set three environment variables; @vercel/otel reads these automatically:
OTEL_EXPORTER_OTLP_ENDPOINT=https://vercel.integrations.otlp.datadoghq.com
OTEL_EXPORTER_OTLP_HEADERS=dd-api-key=<your-dd-api-key>
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
Adjust the endpoint hostname for your Datadog site (e.g. vercel.integrations.otlp.us3.datadoghq.com for US3). No dd-otlp-source header is required with this endpoint.
Application code: add instrumentation.ts — see the APM section. The same file works for all three paths. For logs, choose one of the two options below.
Option 1 — OTLP logs
Datadog provides a dedicated OTLP logs intake endpoint. Configure it with logs-specific env vars (separate from the traces endpoint):
OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=https://vercel.integrations.otlp.datadoghq.com/v1/logs
OTEL_EXPORTER_OTLP_LOGS_HEADERS=dd-api-key=<your-dd-api-key>
OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/protobuf
Then emit logs via @opentelemetry/api-logs:
import { logs, SeverityNumber } from '@opentelemetry/api-logs'
logs.getLogger('my-service').emit({
severityNumber: SeverityNumber.INFO,
severityText: 'INFO',
body: 'user.signed_up',
attributes: { 'user.id': '123', 'plan': 'pro' },
})
Option 2 — Datadog Logs HTTP API
POST directly to https://http-intake.logs.datadoghq.com/api/v2/logs with a DD-API-KEY header. Useful for sending logs from outside the OTel instrumentation path (e.g. a background job or edge function). See the Logs HTTP API docs.
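A minimal sketch of that call, assuming Node 18+ (global fetch) and a DATADOG_API_KEY env var. The service name, ddsource value, and the helper names (buildLogPayload, sendLog) are illustrations, not part of any library; the payload shape follows the v2 logs intake, which accepts an array of log entries:

```typescript
type LogEntry = {
  message: string
  service: string
  ddsource: string
  ddtags?: string
  [key: string]: unknown
}

// Pure helper: build the v2 intake body (an array of log entries).
export function buildLogPayload(
  event: string,
  data: Record<string, unknown> = {},
): LogEntry[] {
  return [{
    message: event,
    service: 'my-app',   // assumption: your service name
    ddsource: 'nextjs',  // assumption: source tag for pipeline routing
    ddtags: `env:${process.env.VERCEL_ENV ?? 'local'}`,
    ...data,
  }]
}

// POST the entry to the logs intake. Adjust the hostname for your Datadog site.
export async function sendLog(event: string, data: Record<string, unknown> = {}) {
  const res = await fetch('https://http-intake.logs.datadoghq.com/api/v2/logs', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DATADOG_API_KEY ?? '',
    },
    body: JSON.stringify(buildLogPayload(event, data)),
  })
  if (!res.ok) console.error(`Datadog log intake rejected: ${res.status}`)
}
```

Because this bypasses OTel entirely, trace correlation only happens if you add dd.trace_id / dd.span_id fields yourself (see the Logs section).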
RUM
Create a client component and render it once in your root layout. It initializes both RUM and Browser Logs.
npm install @datadog/browser-rum @datadog/browser-logs
'use client'
import { datadogRum } from '@datadog/browser-rum'
import { datadogLogs } from '@datadog/browser-logs'
datadogRum.init({
applicationId: process.env.NEXT_PUBLIC_DD_APPLICATION_ID!,
clientToken: process.env.NEXT_PUBLIC_DD_CLIENT_TOKEN!,
site: process.env.NEXT_PUBLIC_DD_SITE ?? 'datadoghq.com',
service: 'my-app-web',
env: process.env.NEXT_PUBLIC_VERCEL_ENV ?? 'local',
version: process.env.NEXT_PUBLIC_VERCEL_GIT_COMMIT_SHA?.slice(0, 7) ?? 'local',
sessionSampleRate: 100,
sessionReplaySampleRate: 100,
trackResources: true,
trackUserInteractions: true,
// Injects traceparent header on matched requests → connects browser session to server span.
// Accepts strings (prefix match), RegExp, or predicate functions — mix as needed.
allowedTracingUrls: [
// Same-origin API calls (covers localhost, preview, and production automatically)
(url) => url.startsWith(window.location.origin),
// Example: explicit production domain
// 'https://my-app.vercel.app',
// Example: all subdomains via regex
// /^https:\/\/.*\.my-domain\.com/,
// Example: specific API path prefix
// (url) => new URL(url).pathname.startsWith('/api/'),
],
})
datadogLogs.init({
clientToken: process.env.NEXT_PUBLIC_DD_CLIENT_TOKEN!,
site: process.env.NEXT_PUBLIC_DD_SITE ?? 'datadoghq.com',
service: 'my-app-web',
forwardErrorsToLogs: true,
sessionSampleRate: 100,
})
export default function DatadogInit() { return null }
app/layout.tsx:
import DatadogInit from '@/app/components/datadog-init'
export default function RootLayout({ children }) {
return (
<html>
<body>
<DatadogInit />
{children}
</body>
</html>
)
}
allowedTracingUrls makes the RUM SDK inject a traceparent header on fetch requests to your own origin. This links the browser RUM session to the server-side APM trace — enabling end-to-end waterfall views in Datadog.
APM / Traces
The same file works for all three paths. The path selection determines which env vars you set — the code is identical.
npm install @vercel/otel @opentelemetry/api @opentelemetry/api-logs
instrumentation.ts:
import { registerOTel } from '@vercel/otel'
export function register() {
registerOTel({
serviceName: process.env.VERCEL_PROJECT_NAME ?? 'my-service',
attributes: {
'deployment.environment': process.env.VERCEL_ENV ?? 'local',
'service.version': process.env.VERCEL_GIT_COMMIT_SHA?.slice(0, 7) ?? 'local',
'cloud.provider': 'vercel',
},
})
}
instrumentation.ts must live at the Next.js project root next to package.json, not inside app/. Next.js only auto-calls register() from the root. Placing it in app/ silently does nothing — all spans will be no-ops.
Vercel sets NEXT_OTEL_FETCH_DISABLED=1 at runtime, so propagateContextUrls in the instrumentation config has no effect — Vercel controls fetch propagation.
Error Tracking on spans
For errors to appear in Datadog Error Tracking, the error must be recorded on a SpanKind.SERVER entry span, and that span must include all three attributes:
import { SpanStatusCode, type Span } from '@opentelemetry/api'
function recordSpanError(span: Span, err: Error) {
span.recordException(err)
span.setStatus({ code: SpanStatusCode.ERROR, message: err.message })
// All three required for Datadog Error Tracking
span.setAttribute('error.type', err.name)
span.setAttribute('error.message', err.message)
span.setAttribute('error.stack', err.stack ?? '')
}
Logs
Datadog Log Management correlates logs to traces via dd.trace_id and dd.span_id — both must be 64-bit decimal integers, not hex.
import { trace } from '@opentelemetry/api'
// OTel trace IDs are 128-bit hex. Datadog wants the lower 64 bits as decimal.
function hexToDecimal(hex: string): string {
return hex ? BigInt(`0x${hex}`).toString(10) : ''
}
export function log(
level: 'info' | 'warn' | 'error',
event: string,
data: Record<string, unknown> = {},
) {
const ctx = trace.getActiveSpan()?.spanContext()
const entry = {
timestamp: new Date().toISOString(),
level,
event,
'dd.trace_id': hexToDecimal((ctx?.traceId ?? '').slice(-16)),
'dd.span_id': hexToDecimal(ctx?.spanId ?? ''),
...data,
}
if (level === 'error') console.error(JSON.stringify(entry))
else console.log(JSON.stringify(entry))
}
traceId.slice(-16) takes the lower 16 hex characters (64 bits) of the 128-bit OTel trace ID, matching what Datadog stores. Without this conversion, log correlation will silently fail.
Source Maps
Enables unminified stack traces in Datadog RUM and APM. Maps are uploaded at build time then deleted so they're never served publicly.
const nextConfig: NextConfig = {
// Required: generates browser .map files for upload
productionBrowserSourceMaps: true,
// Bake git metadata into the bundle for source code integration
env: {
DD_GIT_REPOSITORY_URL: `https://github.com/${process.env.VERCEL_GIT_REPO_OWNER}/${process.env.VERCEL_GIT_REPO_SLUG}`,
DD_GIT_COMMIT_SHA: process.env.VERCEL_GIT_COMMIT_SHA ?? '',
},
// Required in Next.js 16+ when webpack config is present
turbopack: {},
}"build": "next build",
"postbuild": "node scripts/upload-sourcemaps.mjs"// Browser maps
await exec(
  `npx @datadog/datadog-ci sourcemaps upload .next/static ` +
  `--service=my-service-web ` +
  `--release-version=${sha} ` +
  `--minified-path-prefix=/_next/static`
)
// Server maps — generated by both Turbopack and webpack.
// The script checks for .map files first and skips silently if none are found.
await exec(
  `npx @datadog/datadog-ci sourcemaps upload .next/server ` +
  `--service=my-service ` +
  `--release-version=${sha} ` +
  `--minified-path-prefix=/var/task/.next/server`
)
Don't skip the server .map files — server code is minified with single-letter variable names. The upload script auto-detects and uploads them. If you switched to webpack via --webpack, add devtool: 'hidden-source-map' to next.config.ts to generate server maps there too.
Env Variables
Set in Vercel project settings. Variables without NEXT_PUBLIC_ are server-only and never sent to the browser.
| Variable | Required | Notes |
|---|---|---|
| NEXT_PUBLIC_DD_APPLICATION_ID | Yes | RUM application ID from Datadog → RUM & Session Replay → Application |
| NEXT_PUBLIC_DD_CLIENT_TOKEN | Yes | RUM / Browser Logs client token from Datadog |
| NEXT_PUBLIC_DD_SITE | No | Defaults to datadoghq.com. Change for EU/US3/US5. |
| DATADOG_API_KEY | Yes (sourcemaps) | Server-only. Used by postbuild sourcemap upload script. |
| OTEL_EXPORTER_OTLP_ENDPOINT | Path C | e.g. https://vercel.integrations.otlp.datadoghq.com — sets the direct OTLP exporter target. |
| OTEL_EXPORTER_OTLP_HEADERS | Path C | e.g. dd-api-key=<key> (no dd-otlp-source needed with the vercel.integrations.otlp.* endpoint) |
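To fail fast when a required variable is missing, a small startup guard helps. A sketch — the missingEnvVars helper is an illustration, not part of any SDK; it reports names only and never logs values:

```typescript
// Sketch: check required Datadog env vars at startup.
const REQUIRED_VARS = [
  'NEXT_PUBLIC_DD_APPLICATION_ID',
  'NEXT_PUBLIC_DD_CLIENT_TOKEN',
] as const

// Returns the names of required variables that are unset or empty.
export function missingEnvVars(
  env: Record<string, string | undefined> = process.env,
): string[] {
  return REQUIRED_VARS.filter((name) => !env[name])
}

const missing = missingEnvVars()
if (missing.length > 0) {
  // Names only, never values.
  console.warn(`Datadog disabled: missing env vars: ${missing.join(', ')}`)
}
```

Calling this from instrumentation.ts (or the DatadogInit component) surfaces misconfigured preview environments in build logs instead of as silently absent telemetry.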
Advanced Configuration
registerOTel accepts several optional properties beyond serviceName and attributes. These are useful when you need to push additional signals or hook in custom instrumentation libraries.
instrumentations
Registers OpenTelemetry instrumentation libraries via registerInstrumentations(). Accepts an array of Instrumentation instances, or the strings "auto" / "fetch".
When instrumentations is omitted, "auto" is used by default, which enables FetchInstrumentation. If you supply the array yourself, "auto" is not added automatically — include it explicitly if you still want fetch tracing.
import { registerOTel } from '@vercel/otel'
import { RuntimeNodeInstrumentation } from '@opentelemetry/instrumentation-runtime-node'
export function register() {
registerOTel({
serviceName: 'my-service',
instrumentations: [
'auto', // keep default FetchInstrumentation
new RuntimeNodeInstrumentation(), // add Node.js runtime metrics
],
})
}
Vercel sets NEXT_OTEL_FETCH_DISABLED=1 at runtime, so FetchInstrumentation / "auto" has no effect on deployed functions — fetch propagation is controlled by the platform. It may still be useful in local development.
logRecordProcessors
Initializes OTel's logging pipeline. When processors are provided, @vercel/otel creates a LoggerProvider and registers it globally via logs.setGlobalLoggerProvider(). Without this, calls to logs.getLogger(...).emit(...) from @opentelemetry/api-logs are silent no-ops.
Use this when you want to ship structured logs directly to Datadog's OTLP logs intake rather than relying on a Vercel log drain (Path C, Option 1).
import { registerOTel } from '@vercel/otel'
import { SimpleLogRecordProcessor } from '@opentelemetry/sdk-logs'
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-http'
export function register() {
registerOTel({
serviceName: 'my-service',
logRecordProcessors: [
new SimpleLogRecordProcessor(
new OTLPLogExporter({
url: 'https://vercel.integrations.otlp.datadoghq.com/v1/logs',
headers: {
'dd-api-key': process.env.DATADOG_API_KEY!,
},
}),
),
],
})
}
SimpleLogRecordProcessor exports synchronously on each record — suitable for serverless, where the process may exit after each request. BatchLogRecordProcessor is more efficient for long-running servers but risks dropping records on cold-start exits.
metricReaders
Attaches MetricReader instances to the SDK's MeterProvider. Readers control how and when metric data is collected and exported. Use this alongside instrumentations when collecting Node.js runtime metrics (heap, CPU, GC, event loop).
import { registerOTel } from '@vercel/otel'
import { PeriodicExportingMetricReader, AggregationType, InstrumentType } from '@opentelemetry/sdk-metrics'
import { OTLPMetricExporter, AggregationTemporalityPreference } from '@opentelemetry/exporter-metrics-otlp-http'
import { RuntimeNodeInstrumentation } from '@opentelemetry/instrumentation-runtime-node'
export function register() {
const metricReader = new PeriodicExportingMetricReader({
exporter: new OTLPMetricExporter({
url: 'https://otlp.datadoghq.com/v1/metrics',
headers: { 'dd-api-key': process.env.DATADOG_API_KEY! },
// Datadog rejects cumulative sums — delta required
temporalityPreference: AggregationTemporalityPreference.DELTA,
}),
exportIntervalMillis: 30_000,
})
registerOTel({
serviceName: 'my-service',
metricReaders: [metricReader],
instrumentations: [new RuntimeNodeInstrumentation()],
// Drop histograms — Datadog's OTLP intake rejects them
views: [{ aggregation: { type: AggregationType.DROP }, instrumentType: InstrumentType.HISTOGRAM }],
})
}
PeriodicExportingMetricReader pushes on a fixed interval (30 s above). On Vercel serverless functions, cold-start invocations may exit before the first flush — metrics accumulate meaningfully only on warm, long-lived instances. DATADOG_API_KEY must be set server-side; if it is absent, skip the reader entirely to avoid export errors.
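That skip-if-absent rule reduces to a small pure check run before constructing the reader. A sketch — the metricsEnabled helper is an illustration, not a library API:

```typescript
// Sketch: gate metric export on the presence of the API key.
// In register(), call metricsEnabled() first and pass an empty
// metricReaders array when it returns false, so the exporter never
// fires doomed requests against the intake.
export function metricsEnabled(
  env: Record<string, string | undefined> = process.env,
): boolean {
  // Treat empty strings as absent.
  return (env.DATADOG_API_KEY ?? '').length > 0
}
```

In the example above this means replacing `metricReaders: [metricReader]` with `metricReaders: metricsEnabled() ? [metricReader] : []`.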