Signal Lab started as a Slack thread. One of our backend engineers wanted a quick way to generate synthetic APM traffic to validate a Datadog dashboard she was building. She wrote a handful of Next.js API routes that triggered spans, emitted structured logs, and deliberately threw errors. Within a week, half the engineering team was using it.
The Problem We Were Solving
Our customers build marketing automation workflows that depend on our platform being observable. When something goes wrong in a campaign — a segment activation delay, an attribution discrepancy, an integration failure — they need to be able to see exactly what happened. But most of our customers had never configured APM or log correlation before. They needed a playground where they could learn how traces, logs, and RUM events connect without risking their production data.
What Signal Lab Does
Every button in Signal Lab triggers a real API call to a real backend route. Those routes create named OpenTelemetry spans, emit structured JSON logs that flow to Datadog via the Vercel log drain, and return trace IDs that the browser's RUM SDK can correlate with backend traces. When you click "Trigger slow query", you're not simulating a trace — you're creating one.
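A route behind a button like "Trigger slow query" might look roughly like this. This is an illustrative sketch, not our actual code: the function name, delay parameter, and log shape are assumptions, and a real route would take its trace ID from the active OpenTelemetry span rather than minting one by hand as done here.

```typescript
import { randomBytes } from "node:crypto";

// Hypothetical handler behind "Trigger slow query": it does real (here,
// simulated) work, emits one structured JSON log line, and returns the
// trace ID so the browser's RUM SDK can correlate its event with the
// backend trace.
async function triggerSlowQuery(delayMs = 250): Promise<{ traceId: string }> {
  // 128-bit hex ID, shaped like a W3C trace ID. A real route would read
  // this from the current OpenTelemetry span context instead.
  const traceId = randomBytes(16).toString("hex");
  const started = Date.now();

  // The "slow query" itself, simulated with a timer.
  await new Promise((resolve) => setTimeout(resolve, delayMs));

  // Structured JSON log with the trace correlation ID, as it would flow
  // to Datadog through the log drain.
  console.log(
    JSON.stringify({
      level: "info",
      message: "slow query completed",
      trace_id: traceId,
      duration_ms: Date.now() - started,
    })
  );

  return { traceId };
}
```

The key detail is the return value: because the backend hands its trace ID back to the browser, the frontend RUM event and the backend trace can be joined on that ID.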
Observable by Default
The deeper lesson from building Signal Lab is that observability works best when it's not an afterthought. Every API route in our platform follows the same pattern: a named span wrapping the business logic, structured logs with trace correlation IDs, and error status propagated up the span tree. Signal Lab is just that pattern made visible. If you can explain what your telemetry looks like when things go right, you can find it faster when things go wrong.
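That pattern can be sketched as a small wrapper. The `withSpan` helper below is hypothetical and uses a toy `Span` type for illustration; a production version would wrap something like OpenTelemetry's `tracer.startActiveSpan` instead. What it shows is the shape of the pattern: a named span around the business logic, a structured log line carrying the trace correlation ID, and error status walked up the span tree on failure.

```typescript
import { randomBytes } from "node:crypto";

// Toy span type standing in for a real tracing span (illustrative only).
interface Span {
  name: string;
  traceId: string;
  status: "ok" | "error";
  parent?: Span;
}

let activeSpan: Span | undefined;

// Wrap business logic in a named span. On failure, mark this span and
// every ancestor as errored, then rethrow so callers still see the error.
function withSpan<T>(name: string, fn: (span: Span) => T): T {
  const span: Span = {
    name,
    // Child spans share the parent's trace ID; roots mint a new one.
    traceId: activeSpan?.traceId ?? randomBytes(16).toString("hex"),
    status: "ok",
    parent: activeSpan,
  };
  activeSpan = span;
  try {
    return fn(span);
  } catch (err) {
    // Propagate error status up the span tree.
    for (let s: Span | undefined = span; s; s = s.parent) s.status = "error";
    throw err;
  } finally {
    activeSpan = span.parent;
    // Structured log with the trace correlation ID, emitted per span.
    console.log(
      JSON.stringify({ span: name, trace_id: span.traceId, status: span.status })
    );
  }
}
```

Used like this, a failure deep in a nested span marks the whole chain, which is exactly what makes the error easy to find from the top of the trace:

```typescript
try {
  withSpan("campaign.activate", () => {
    withSpan("segment.resolve", () => {
      throw new Error("segment activation delay");
    });
  });
} catch {
  // Both spans have already been logged with status "error".
}
```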