- Updated: March 19, 2026
End‑to‑End Trace Collection for Edge OpenClaw with OpenTelemetry Collector and Jaeger
Collecting distributed traces from an edge OpenClaw deployment gives you deep visibility into request flows, latency bottlenecks, and error patterns. This guide walks you through setting up the OpenTelemetry Collector, sending data to a Jaeger backend, exploring traces in the Jaeger UI, building custom dashboards, and troubleshooting common issues.
1. Deploy the OpenTelemetry Collector
Use the official OpenTelemetry Collector Docker image on each OpenClaw edge node. A minimal otel-collector-config.yaml that forwards traces to Jaeger looks like this (note: the built-in `jaeger` exporter was removed from the Collector in v0.86.0; on newer Collector versions, export via the `otlp` exporter instead, since Jaeger 1.35+ ingests OTLP natively):
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  jaeger:
    endpoint: "jaeger-collector:14250"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
```
Start the collector with:
```shell
docker run -d \
  -v "$(pwd)/otel-collector-config.yaml":/etc/otelcol/config.yaml \
  -p 4317:4317 -p 4318:4318 \
  otel/opentelemetry-collector:latest \
  --config /etc/otelcol/config.yaml
```
2. Instrument OpenClaw
Add the OpenTelemetry SDK to your OpenClaw services (Go, Python, Node.js, etc.). Example for a Go service:
```go
import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/sdk/trace"
)

func initTracer() {
	ctx := context.Background()
	// Export spans over OTLP/gRPC to the local collector.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"),
		otlptracegrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatalf("failed to create OTLP trace exporter: %v", err)
	}
	tp := trace.NewTracerProvider(trace.WithBatcher(exp))
	otel.SetTracerProvider(tp)
}
```
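If you would rather not configure the exporter in code, most OpenTelemetry SDKs also honor standard environment variables. A minimal sketch (the service name `openclaw-edge` is an assumed example; substitute your own):

```shell
# Standard OpenTelemetry SDK environment variables, read at startup
# by most language SDKs as an alternative to in-code configuration.
export OTEL_SERVICE_NAME="openclaw-edge"                    # assumed example name
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"  # local collector
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_TRACES_EXPORTER="otlp"
```

This keeps endpoint details out of the binary, which is convenient when the same service image runs on many edge nodes.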
3. Verify Traces in Jaeger UI
Deploy Jaeger (all‑in‑one) on your Kubernetes cluster or as a Docker Compose stack. Open the Jaeger UI at http://&lt;jaeger-host&gt;:16686. You should see your OpenClaw services listed. Select a service, choose a time range, and click “Find Traces”. The UI lets you drill down into spans, view attributes, and see parent‑child relationships.
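For the Docker Compose route, a minimal sketch of the all-in-one service looks like this (the service name `jaeger-collector` is an assumption chosen to match the exporter endpoint in the collector config above):

```yaml
# docker-compose.yml (sketch)
services:
  jaeger-collector:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"   # Jaeger UI
      - "14250:14250"   # gRPC span ingest, matching the collector's jaeger exporter
```

Because the Compose service name doubles as a DNS name on the Compose network, the collector's `jaeger-collector:14250` endpoint resolves without further configuration.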
4. Create a Custom Jaeger Dashboard
Jaeger integrates with Grafana for richer visualisations. Follow the official Grafana dashboard guide and add panels such as:
- Average latency per OpenClaw endpoint
- Trace error rate over time
- Top‑slowest spans
Save the dashboard and share it with your team.
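Rather than adding the data source by hand, Grafana can also provision it declaratively. A hedged sketch, assuming Jaeger's query service is reachable at `jaeger-collector:16686`:

```yaml
# grafana/provisioning/datasources/jaeger.yaml (sketch)
apiVersion: 1
datasources:
  - name: Jaeger
    type: jaeger
    access: proxy
    url: http://jaeger-collector:16686
```

Provisioning keeps the data source definition in version control alongside your dashboards, which is useful when you rebuild Grafana on edge nodes.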
5. Troubleshooting Common Issues
- No traces appear in Jaeger: Verify the collector can reach the Jaeger exporter (network/firewall). Check collector logs for errors.
- High latency on trace ingestion: Increase the batch size or enable compression in the exporter configuration.
- Missing attributes: Ensure your instrumentation adds relevant span attributes (e.g., request ID, user ID).
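For the ingestion-latency item above, the batching and compression knobs live in the collector config. A sketch of the relevant fragment (the batch sizes are illustrative starting points, not tuned values; `compression: gzip` assumes a gRPC-based exporter):

```yaml
processors:
  batch:
    send_batch_size: 1024   # spans per batch; tune to your traffic
    timeout: 5s             # flush interval when the batch is not full

exporters:
  jaeger:
    endpoint: "jaeger-collector:14250"
    compression: gzip       # reduce bytes on the wire from edge nodes
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]   # batch before export
      exporters: [jaeger]
```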
6. Reference Guide
For a deeper dive on building telemetry dashboards, see our Telemetry Dashboard Guide that covers metric collection, alerting, and dashboard best practices.
Ready to host OpenClaw on UBOS? Follow the step‑by‑step instructions in our OpenClaw hosting guide.
Happy tracing!