Technology

Tracing That Feels Like a Conversation With Your App

{
“title”: “OpenTelemetry Made Practical: End‑to‑End Tracing for PHP/Laravel and Node.js with Jaeger and Tempo”,
“content”: “

It started on a Tuesday afternoon, right when the team swore everything “looked good in logs.” The site wasn’t down, exactly—just excruciatingly slow for a handful of requests. Our Laravel app was fine on its own, the Node.js worker insisted it was innocent, and the external API was apparently returning in record time. Classic he-said-she-said between services. Ever had that moment when you stare at a dashboard and feel like the real story is happening between the charts? That was me, coffee in one hand and a sinking feeling in the other.

That was the week I stopped guessing and wired up OpenTelemetry end to end. Suddenly, the whole request path lit up like an airport runway. You could watch the HTTP call leave Laravel, see it hop into the Node.js service, hit the database, and finally bounce back. The slow part wasn’t where anyone expected. And the fix? A tiny change in how we pooled connections and a tighter timeout policy on one span. Today, I want to show you exactly how to get that X-ray vision—practically, with just enough code to run tomorrow morning. We’ll instrument PHP/Laravel and Node.js, add the OpenTelemetry Collector, and send traces to Jaeger or Tempo so you can actually see what’s going on.

Here’s the thing: logs tell you “what,” metrics tell you “how many,” but traces tell you “where it hurts.” Tracing is just following a single request as it journeys through your stack—front door to back office and home again. Each step is a span, and spans nest and chain together into a trace. You get timing, relationships, and enough context to know if a slow query was the root cause or just collateral damage.

OpenTelemetry is the universal translator in this story. It’s vendor-neutral, works across languages, and speaks the same headers your PHP and Node.js services can both understand. Instead of picking a different agent for each runtime, you use one common way to create spans, propagate context, and export data. It’s like switching your team to one group chat instead of juggling six different apps that don’t talk to each other.

And because OpenTelemetry is just about collecting signals, you can choose where to send them. Jaeger and Tempo are two friendly places for traces to live. Jaeger gives you a powerful UI for searching and analyzing. Tempo plays beautifully with Grafana, sliding your traces right next to dashboards and logs. You don’t need to choose one forever—just pick what makes sense now.

A Simple Architecture That Works in Real Life

Let’s keep the architecture boring in the best way. Your apps (Laravel and Node.js) use OpenTelemetry SDKs to produce spans. Those spans get shipped to an OpenTelemetry Collector. The Collector is the traffic controller: it receives spans, batches them, optionally transforms them, and forwards them to Jaeger or Tempo. Think of it as a hub that decouples what apps send from where data ends up. Swap destinations anytime without redeploying your code.

Two small but mighty ideas make the magic happen. First, context propagation: a small set of headers (like traceparent) that ride along with every request so the trace stays stitched across services. Second, resource attributes: global tags like service.name, deployment.environment, and service.version that make traces searchable and meaningful. Name things well and your future self will thank you during a late-night incident.
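To make that concrete, here is what the traceparent header looks like on the wire (the example ids come from the W3C Trace Context spec): a version, the trace id, the parent span id, and trace flags, joined by dashes.

traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01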

If you want a deeper dive later, the official OpenTelemetry docs are a solid companion. But for now, let’s wire this up.

Laravel: Add Traces Without Breaking Your Flow

Install the SDK and set the stage

In my experience, the most painless path is to start with the PHP OpenTelemetry SDK and export via OTLP to the Collector. I like to enable tracing early in Laravel’s bootstrap so everything—from routing to controllers—can be captured. Versions and package names can evolve, but you’ll look for the core OpenTelemetry PHP packages plus an OTLP exporter.

# In your Laravel project
composer require open-telemetry/opentelemetry
# If your setup separates exporters/extensions, you may add (names can evolve):
# composer require open-telemetry/exporter-otlp

Now, initialize a tracer provider early. One practical spot is bootstrap/app.php or a dedicated service provider that runs on boot. The idea is to set the global tracer provider and point it to the Collector via OTLP.

<?php
// app/Providers/TelemetryServiceProvider.php (example)
namespace App\Providers;

use Illuminate\Support\ServiceProvider;
// Class and constructor names below follow recent open-telemetry/opentelemetry releases; adjust to your SDK version.
use OpenTelemetry\Contrib\Otlp\OtlpHttpTransportFactory;
use OpenTelemetry\Contrib\Otlp\SpanExporter;
use OpenTelemetry\SDK\Common\Attribute\Attributes;
use OpenTelemetry\SDK\Common\Time\ClockFactory;
use OpenTelemetry\SDK\Resource\ResourceInfo;
use OpenTelemetry\SDK\Sdk;
use OpenTelemetry\SDK\Trace\Sampler\ParentBased;
use OpenTelemetry\SDK\Trace\Sampler\TraceIdRatioBasedSampler;
use OpenTelemetry\SDK\Trace\SpanProcessor\BatchSpanProcessor;
use OpenTelemetry\SDK\Trace\TracerProvider;
use OpenTelemetry\SemConv\ResourceAttributes;

class TelemetryServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        // No-op
    }

    public function boot(): void
    {
        $serviceName = env('OTEL_SERVICE_NAME', 'laravel-app');
        $environment = env('OTEL_ENV', 'development');
        $version     = env('OTEL_SERVICE_VERSION', '1.0.0');

        // Resource attributes are the global tags attached to every span from this service.
        $resource = ResourceInfo::create(Attributes::create([
            ResourceAttributes::SERVICE_NAME => $serviceName,
            ResourceAttributes::DEPLOYMENT_ENVIRONMENT => $environment,
            ResourceAttributes::SERVICE_VERSION => $version,
        ]));

        // OTLP over HTTP/protobuf to the Collector.
        // Transport/exporter class names can differ between SDK versions; adjust to yours.
        $transport = (new OtlpHttpTransportFactory())->create(
            env('OTEL_EXPORTER_OTLP_ENDPOINT', 'http://otel-collector:4318/v1/traces'),
            'application/x-protobuf'
        );
        $exporter = new SpanExporter($transport);

        // Sample a ratio of new traces, but always follow the parent's decision.
        $sampler = new ParentBased(new TraceIdRatioBasedSampler((float) env('OTEL_TRACES_SAMPLER_ARG', 1.0)));

        $provider = new TracerProvider(
            [new BatchSpanProcessor($exporter, ClockFactory::getDefault())], // some versions require the clock argument
            $sampler,
            $resource
        );

        // Register globally so Globals::tracerProvider() returns this provider everywhere.
        Sdk::builder()
            ->setTracerProvider($provider)
            ->setAutoShutdown(true)
            ->buildAndRegisterGlobal();
    }
}

Don’t forget to register that provider (in config/app.php, or in bootstrap/providers.php on Laravel 11+) and add environment variables to .env:

OTEL_SERVICE_NAME=laravel-app
OTEL_SERVICE_VERSION=1.2.3
OTEL_ENV=production
OTEL_TRACES_SAMPLER_ARG=0.2
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318/v1/traces
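
If it’s been a while since you registered a provider by hand, the registration itself is one line in the providers list. Shown here schematically for the classic config/app.php layout; Laravel 11+ keeps the list in bootstrap/providers.php instead.

// config/app.php (excerpt)
'providers' => [
    // ...
    App\Providers\TelemetryServiceProvider::class,
],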

Make your spans helpful, not noisy

Automatic instrumentation can capture HTTP server spans and common frameworks, but you still want a few manual spans around business logic that matters. In a controller or service class, wrap the fragile or expensive bit. Keep names human-friendly; future you will skim them at 3 a.m.

<?php
use OpenTelemetry\API\Globals;
use OpenTelemetry\API\Trace\StatusCode;

$tracer = Globals::tracerProvider()->getTracer('app');

// Somewhere in a controller action:
$span = $tracer->spanBuilder('Checkout: calculate cart totals')->startSpan();
try {
    // Your business logic...
    $total = $cartService->calculateTotals($cartId);
    $span->setAttribute('cart.items.count', count($cartService->items($cartId)));
    $span->setAttribute('currency', 'USD');
} catch (\Throwable $e) {
    $span->recordException($e);
    $span->setStatus(StatusCode::STATUS_ERROR, $e->getMessage());
    throw $e;
} finally {
    $span->end();
}

Propagate context in outbound calls

When Laravel calls another service—maybe your Node.js worker—pass the trace context along. Most HTTP clients can be wrapped so the traceparent header rides shotgun. If you’re using Guzzle, either use contributed instrumentation or add the header manually by reading the current context.

<?php
use GuzzleHttp\Client;
use OpenTelemetry\API\Trace\Propagation\TraceContextPropagator;
use OpenTelemetry\Context\Context;

$client = new Client([
    'base_uri' => env('WORKER_BASE_URL', 'http://node-worker:3000')
]);

$headers = [];
// Injects traceparent (and tracestate) from the current context into the
// array carrier; the propagator's default setter handles plain arrays.
TraceContextPropagator::getInstance()->inject($headers);

$response = $client->request('POST', '/process', [
    'headers' => $headers,
    'json' => [
        'order_id' => $orderId,
        'user_id'  => $userId,
    ],
    'timeout' => 2.0,
]);

Correlate logs: show the trace id in every line

One small change that pays off forever: add trace_id to every log. It’s like leaving breadcrumbs from the log to the trace. In Laravel, you can add a Monolog processor that reads the current span context and appends IDs.

<?php
// app/Logging/TraceContextProcessor.php
namespace App\Logging;

use OpenTelemetry\API\Trace\Span;
use Monolog\Processor\ProcessorInterface;

class TraceContextProcessor implements ProcessorInterface
{
    // Monolog 2 signature; on Monolog 3 (Laravel 10+), type-hint Monolog\LogRecord
    // and write to $record->extra instead of the array offsets.
    public function __invoke(array $record): array
    {
        $spanContext = Span::getCurrent()->getContext();
        if ($spanContext->isValid()) {
            $record['extra']['trace_id'] = $spanContext->getTraceId();
            $record['extra']['span_id']  = $spanContext->getSpanId();
        }
        return $record;
    }
}

Register that processor in config/logging.php and you’ll be able to jump from a log line to the exact trace in Jaeger or Tempo. If you also centralize logs, you can wire them together in Grafana. I’ve shown the pattern in my Loki + Promtail + Grafana playbook for centralized logging.
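
If you’re wondering what that registration looks like, one option is Laravel’s per-channel tap hook. This is a sketch assuming a small tap class named AddTraceContext and a standard single channel; adjust names and channels to your setup.

<?php
// app/Logging/AddTraceContext.php (example "tap" class)
namespace App\Logging;

use Illuminate\Log\Logger;

class AddTraceContext
{
    public function __invoke(Logger $logger): void
    {
        // Laravel's Logger proxies unknown calls to the underlying Monolog logger.
        $logger->pushProcessor(new TraceContextProcessor());
    }
}

// Then, in config/logging.php:
// 'single' => [
//     'driver' => 'single',
//     'path'   => storage_path('logs/laravel.log'),
//     'tap'    => [App\Logging\AddTraceContext::class],
// ],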

Node.js: Express, Workers, and a Tracer That Doesn’t Get in Your Way

Install and initialize

On the Node.js side, I like using the all-in-one Node SDK with auto-instrumentations. It covers HTTP, Express, common databases, and queue libraries. Then I add a couple of manual spans where the business logic deserves a name.

# In your Node.js service
npm i @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node \
      @opentelemetry/exporter-trace-otlp-http @opentelemetry/semantic-conventions

Create a small tracing.js (or tracing.ts) that runs before your app. A common pattern is to import it at the top of your server.js, or use node -r ./tracing.js server.js so it boots first.

// tracing.js
'use strict';
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { Resource } = require('@opentelemetry/resources');
// Note: newer releases of these packages rename some exports (e.g. ATTR_SERVICE_NAME); adjust to your installed versions.
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

const exporter = new OTLPTraceExporter({
  url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://otel-collector:4318/v1/traces',
  headers: {} // Add if your collector needs auth
});

const sdk = new NodeSDK({
  traceExporter: exporter,
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: process.env.OTEL_SERVICE_NAME || 'node-worker',
    [SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]: process.env.OTEL_ENV || 'development',
    [SemanticResourceAttributes.SERVICE_VERSION]: process.env.OTEL_SERVICE_VERSION || '1.0.0',
  }),
  instrumentations: [getNodeAutoInstrumentations()],
  // Sampler can be set via env: OTEL_TRACES_SAMPLER=parentbased_traceidratio, OTEL_TRACES_SAMPLER_ARG=0.2
});

// start() is synchronous in recent SDK versions (older releases returned a Promise).
sdk.start();
console.log('OpenTelemetry initialized for Node.js');

process.on('SIGTERM', async () => {
  try {
    await sdk.shutdown();
    console.log('OpenTelemetry shutdown complete');
  } catch (err) {
    console.error('Error shutting down OpenTelemetry', err);
  } finally {
    process.exit(0);
  }
});

Environment variables keep things flexible:

OTEL_SERVICE_NAME=node-worker
OTEL_SERVICE_VERSION=2.4.0
OTEL_ENV=production
OTEL_TRACES_SAMPLER=parentbased_traceidratio
OTEL_TRACES_SAMPLER_ARG=0.2
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318/v1/traces

Manual spans for the parts that matter

Let’s say your Node.js service receives the request from Laravel, does a bit of CPU work, then writes to the database. Wrap the tricky bit with a named span and tag it with the interesting parts (but keep PII out).

// In an Express route handler
const { trace, SpanStatusCode } = require('@opentelemetry/api');

app.post('/process', async (req, res) => {
  const tracer = trace.getTracer('worker');
  await tracer.startActiveSpan('Process order workflow', async (span) => {
    try {
      span.setAttribute('order.id', String(req.body.order_id));
      await slowStep();        // CPU or external call
      await writeToDb();       // DB write
      span.addEvent('order.processed');
      res.json({ ok: true });
    } catch (err) {
      span.recordException(err);
      span.setStatus({ code: SpanStatusCode.ERROR, message: 'Error processing order' });
      res.status(500).json({ ok: false });
    } finally {
      span.end();
    }
  });
});

Keep context when calling other services

If you call another internal service or a third-party API, make sure you forward the context headers (traceparent, and optionally tracestate). Most auto-instrumentations add this automatically for Node’s native HTTP, Axios, and friends. If you need to add it yourself, read from the current context and set headers before sending the request.
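
If you ever need to inject the headers yourself, say for a client library the auto-instrumentation doesn’t cover, a minimal sketch looks like this (the URL and payload are placeholders; global fetch assumes Node 18+):

// Manual context injection; normally the http/axios auto-instrumentations handle this for you.
const { context, propagation } = require('@opentelemetry/api');

async function callOtherService(payload) {
  const headers = { 'Content-Type': 'application/json' };
  // Writes traceparent (and tracestate, if present) into the headers carrier.
  propagation.inject(context.active(), headers);

  return fetch('http://other-service:8080/api', { // placeholder URL
    method: 'POST',
    headers,
    body: JSON.stringify(payload),
  });
}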

Correlate logs with trace ids

Whether you’re a Pino fan or team Winston, add a small hook so every log line carries the active trace_id and span_id. Then it’s trivial to jump from a log search to the exact trace in Jaeger or Tempo.

// With Pino
const pino = require('pino');
const { context, trace } = require('@opentelemetry/api');

const logger = pino();

// Attach the active trace/span ids to each log line.
// (Pino's `mixin` option is an alternative if you'd rather do this globally.)
function withTraceBindings(msg, extra = {}) {
  const span = trace.getSpan(context.active());
  if (span) {
    const { traceId, spanId } = span.spanContext();
    return logger.child({ trace_id: traceId, span_id: spanId, ...extra }).info(msg);
  }
  return logger.info(extra, msg);
}

// Usage in code:
withTraceBindings('Job started', { job: 'thumbnail' });

The Collector: Your Calm, Configurable Hub

I always reach for the OpenTelemetry Collector because it keeps your code simple and your future options open. Your apps send OTLP to the Collector. The Collector batches, retries, and exports to the backend of your choice. You can tweak sampling, rename attributes, and fan out to multiple destinations if you’re experimenting.

A minimal Collector config for Jaeger and Tempo

Use a single Collector that receives OTLP over HTTP and forwards to both Jaeger and Tempo. In a real environment, you might point to only one or split by environment. Here’s a compact example that you can adapt.

# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      http:
      grpc:

processors:
  batch:

exporters:
  # Recent Collector releases no longer ship a dedicated `jaeger` exporter;
  # Jaeger (v1.35+) ingests OTLP natively, so export OTLP straight to it.
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
  otlphttp/tempo:
    endpoint: http://tempo:4318
    # headers: { 'X-Scope-OrgID': 'tenant-1' } # If using multi-tenant Tempo
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger, otlphttp/tempo]

Run the Collector however you like—Docker, systemd, Kubernetes. Then point Laravel and Node at http://<collector>:4318/v1/traces for OTLP over HTTP, or at <collector>:4317 if you prefer gRPC (the gRPC endpoint takes no URL path). The difference isn’t philosophical; it’s just plumbing. Stick to what’s easiest in your environment.

Quick Jaeger and Tempo notes

Jaeger’s all-in-one image is handy for local dev; their getting started page walks through ports and UI. Tempo is a different kind of beast—schemaless, cheap storage, best friends with Grafana. The Grafana Tempo documentation shows how to add Tempo as a data source, then explore traces in Grafana alongside logs and metrics. Don’t overthink it. Start simple, get traces flowing, then decide your longer-term home.

Put It All Together: Click a Trace, Find the Bottleneck

Here’s what your first real trace will feel like. You’ll open Jaeger or Grafana, search by service.name set to your Laravel app, and click a trace that took longer than it should. The root span is the incoming HTTP request. Beneath it, you’ll see spans for your controller, outbound HTTP to Node, the Node handler, database calls, maybe a cache check. A single long bar will jump out. That’s your bottleneck.

One of my clients had a trace where the Node span looked slow, but the real delay was upstream—Laravel waited too long before calling Node because it was busy assembling a large payload in PHP. We shaved off time by moving that assembly to the Node side, closer to where the work belonged. Without the trace, everyone would’ve stared at the wrong service.

Another time, we saw a pattern: quick requests in the morning, then a gradual slowdown. The traces told us sampling wasn’t the issue; it was connection pooling during a traffic spike. We added a limit to concurrent calls, tightened timeouts, and the mountain flattened. You don’t need a committee to read traces. You just need to look for the chunk of time that doesn’t make sense and ask “why there?”

Practical Tips From the Field (AKA The Stuff I Wish I Knew First)

Start with high sampling in dev—100% is fine—so you can see everything while you’re wiring it up. In production, dial it down. I like a parent-based sampler with a modest ratio for steady traffic and a way to temporarily bump it during incidents. If you need to capture a particular tenant or endpoint more aggressively, the Collector can help with tail-based sampling in more advanced setups.
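
For reference, tail-based sampling happens in the Collector (you’ll want the contrib distribution for it), not in your apps. A sketch of the idea, keeping errors and slow traces while sampling the rest, might look like this; tune the names and thresholds to your traffic:

# Requires the Collector contrib distribution, which ships the tail_sampling processor.
processors:
  tail_sampling:
    decision_wait: 10s              # buffer spans this long before deciding per trace
    policies:
      - name: keep-errors
        type: status_code
        status_code: { status_codes: [ERROR] }
      - name: keep-slow-traces
        type: latency
        latency: { threshold_ms: 500 }
      - name: sample-the-rest
        type: probabilistic
        probabilistic: { sampling_percentage: 10 }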

Keep your service names stable and meaningful. Include service.version so you can compare releases. Tag deployment.environment to quickly filter staging versus production. Use attributes sparingly and thoughtfully—don’t stuff full SQL queries into attributes or put PII where it doesn’t belong.

Propagation is the glue. By default, the SDKs use the W3C Trace Context, which is what you want. If a legacy service speaks B3 headers, you can add a propagator to translate, but try to converge on one standard. Also, make sure your reverse proxy doesn’t strip tracing headers. If you’re behind Nginx or a load balancer, let traceparent through.
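
Nginx forwards custom request headers by default, so usually there is nothing to configure. If something in your proxy chain does filter headers, passing the trace context through explicitly looks roughly like this (the upstream name is illustrative):

# Only needed if your proxy layer strips unknown request headers.
location / {
    proxy_pass http://laravel_upstream;
    proxy_set_header traceparent $http_traceparent;
    proxy_set_header tracestate  $http_tracestate;
}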

Don’t ignore time sync. I know it sounds boring, but drift between hosts can make spans look weird. NTP is your quiet friend. The same goes for container restarts and hot reloads—remember to gracefully shut down SDKs so they flush spans on exit. It feels like overkill until the one span you needed never makes it upstream.

For logs, include trace_id and span_id everywhere you can. When you later stitch logs and traces in Grafana, it feels like turning on the lights in a dark room. If you’re curious how I set that up with Loki and Promtail, I walk through the approach in the logging playbook linked earlier.

Finally, be kind to your future self. Name manual spans after the business action, not the function name. “Apply voucher and recompute totals” tells a human what happened. “calcTotalsV2” will look like static when you’re tired.

Troubleshooting: When Nothing Shows Up (Or Shows Up Wrong)

If your traces don’t appear, start at the edges and work inward. From the app side, verify environment variables. If you’re using HTTP, can the app reach collector:4318? If you’re using Docker, is the network configured so services can see each other? Firewalls and SELinux can be surprisingly chatty about blocking traffic—listen to them.
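
A quick reachability check from the app host (or inside the app container) is to post an empty OTLP payload and see whether you get any HTTP response at all; a connection error points to networking rather than instrumentation:

# An empty JSON body is a valid (empty) export request; any HTTP status back means the Collector is reachable.
curl -i -X POST http://otel-collector:4318/v1/traces \
  -H 'Content-Type: application/json' \
  -d '{}'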

Next, look at the Collector logs. Is it receiving? Are batches being sent? If Jaeger or Tempo are not showing the data, try sending only to one exporter to reduce moving parts. Keep the config small. When in doubt, switch to an all-in-one Jaeger in local dev to sanity check data flow.
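
Another low-effort trick: temporarily add the Collector’s debug exporter (called logging in older releases) to the traces pipeline and watch the Collector’s stdout to confirm spans are arriving at all:

# Temporary troubleshooting pipeline: print received spans to the Collector's own logs.
exporters:
  debug:
    verbosity: detailed   # on older Collector releases, use the `logging` exporter instead

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]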

If spans appear but don’t stitch across services, it’s a propagation issue. Make sure your outbound HTTP in Laravel injects the W3C headers and that Node extracts them (auto-instrumentations usually do). Check your ingress: proxies should not strip traceparent. Also, verify clocks—mismatched time can make child spans look older than parents, which is confusing and sometimes leads you to the wrong conclusion.

In PHP-FPM land, remember to reload after installing extensions or changing provider code. In Node.js, make sure your tracing.js is required before the rest of your app so auto-instrumentations can patch modules at import time. If you’re using cluster mode, ensure you initialize tracing in each worker.

A Tiny Docker Compose to Get You Moving

If you want something you can run today on a laptop or a throwaway VM, here is a small Compose file that includes the Collector, Jaeger, and Tempo. It’s not a production blueprint—it’s a playground where you can see traces moving.

version: '3.9'
services:
  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ['--config=/etc/otel-collector-config.yaml']
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - '4317:4317'   # gRPC
      - '4318:4318'   # HTTP

  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - '16686:16686' # UI (the Collector reaches Jaeger's OTLP port 4317 over the Compose network, so no extra host mapping is needed)

  tempo:
    image: grafana/tempo:latest
    command: ['-config.file=/etc/tempo.yaml']
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml
    ports:
      - '3200:3200'   # Tempo HTTP API
      - '4319:4318'   # OTLP HTTP, remapped to 4319 on the host because the Collector already uses 4318 (only needed if you want to send directly to Tempo)

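The Compose file expects a tempo.yaml next to it. A minimal single-binary config with local storage looks roughly like this; paths and settings are playground defaults, so check the Tempo docs for your version:

# tempo.yaml (minimal single-binary setup with local storage; not for production)
server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        http:
        grpc:

storage:
  trace:
    backend: local
    wal:
      path: /var/tempo/wal
    local:
      path: /var/tempo/blocks
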
Point Laravel and Node to http://localhost:4318/v1/traces. Then open Jaeger at http://localhost:16686 and Tempo (most often via Grafana). If you’ve got Grafana handy, add Tempo as a data source and play with the trace explorer. It’s oddly satisfying to watch spans line up after wiring the parts together.
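
If you provision Grafana from files, adding Tempo as a data source is only a few lines (this assumes Grafana’s standard provisioning directory layout):

# grafana/provisioning/datasources/tempo.yaml
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3200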

Security and Privacy: Keep Traces Useful and Safe

As you add attributes to spans, avoid PII like emails, card numbers, or anything you wouldn’t want in a support ticket. It’s fine to reference stable IDs that mean something to your team, like user.id or order.id, as long as they don’t point straight at sensitive information. Keep attribute cardinality reasonable—avoid unique or ever-growing values in attributes that you’ll use for filtering. Save the long stories for logs, and keep traces focused on timing and relationships.

It’s also smart to add a small review step in your PR process: “Are we recording anything sensitive in spans?” You’ll catch things early, and your compliance folks will sleep better. If you want to be extra tidy, add attribute processors in the Collector to drop or rename anything risky before it hits your backend.
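
As a sketch, the Collector’s attributes processor (contrib distribution) can delete or hash risky keys centrally before export; the attribute names below are examples, not a standard:

# Scrub sensitive span attributes before they reach Jaeger or Tempo.
processors:
  attributes/scrub:
    actions:
      - key: user.email      # example attribute name
        action: delete
      - key: enduser.id
        action: hash
# Remember to add attributes/scrub to the traces pipeline's processors list.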

When to Pick Jaeger or Tempo (And Why You Don’t Have to Marry Either)

I’ve used both in real projects, sometimes even side by side during a transition. Jaeger has a clean UI that makes it easy to search and analyze traces with no extra ceremony. Tempo shines when you’re already living in Grafana and want traces, logs, and metrics in one place. The good news is that OpenTelemetry doesn’t lock you in. If you start with one and later want the other, the Collector makes that decision a configuration change rather than a code rewrite.

For a quick local dev setup, Jaeger’s all-in-one is a fast win. For a stack that already loves Grafana dashboards and Loki logs, Tempo slips right in. You can’t really go wrong by starting, learning, and then refining.

Wrap-Up: The Calm After the Trace

If you’ve ever felt like your services were talking behind your back, tracing is your translator. With Laravel and Node wired to OpenTelemetry, and a Collector steering traffic to Jaeger or Tempo, your stack starts telling the truth in a way you can actually use. The first time you click a trace and see the slow step glowing on screen, you’ll wonder how you ever debugged without it.

My advice: keep it simple at first. Set the SDKs, send OTLP to the Collector, and view traces in one backend. Add manual spans to the parts of your code that make you nervous. Correlate logs with trace IDs so you can pivot between views without losing your place. As you grow more confident, tune sampling and add small guardrails for privacy. The whole point is clarity without drama.

Hope this was helpful. If you wire this up this week and uncover a sneaky bottleneck, I’d love to hear about it. See you in the next post—and may your slowest span be the one you already expect.

“,
“focus_keyword”: “OpenTelemetry end-to-end tracing”,
“meta_description”: “Practical OpenTelemetry for PHP/Laravel and Node.js: wire up the Collector, trace requests end to end, and view spans in Jaeger or Tempo to debug faster.”,
“faqs”: [
{
“question”: “Do I really need the OpenTelemetry Collector, or can my apps send traces directly?”,
“answer”: “Great question! You can send traces directly, but I like the Collector as a routing hub. It batches, retries, and lets you switch between Jaeger or Tempo without redeploying code. It’s the small piece that keeps the rest flexible.”
},
{
“question”: “Should I pick Jaeger or Tempo for my first tracing backend?”,
“answer”: “Start with whichever gets you seeing traces faster. Jaeger’s all‑in‑one is super quick for local dev. Tempo is lovely if you’re deep in Grafana already. The best part is you’re not locked in—OpenTelemetry plus the Collector makes switching a config change.”
},
{
“question”: “How do I avoid leaking sensitive data in traces?”,
“answer”: “Keep PII out of span attributes and events. Use stable internal IDs instead of emails or tokens. If in doubt, drop or rename risky attributes in the Collector. A tiny PR checklist—“are we recording anything sensitive?”—goes a long way.”
}
]
}
