- Updated: March 17, 2026
- 3 min read
Adding Distributed Tracing to OpenClaw on UBOS with OpenTelemetry
In this guide we walk developers through instrumenting OpenClaw running on UBOS with OpenTelemetry, exporting spans to Jaeger or Grafana Tempo, and correlating those traces with existing Prometheus/Grafana metrics.
Prerequisites
- UBOS node with OpenClaw installed.
- Access to the UBOS dashboard.
- OpenTelemetry SDK for Go (or the language OpenClaw is written in).
- Jaeger or Grafana Tempo endpoint reachable from the UBOS node.
- Prometheus and Grafana already scraping OpenClaw metrics.
1. Install OpenTelemetry Collector on UBOS
Use the UBOS app store or the ubos app install command to deploy the OpenTelemetry Collector. Configure the collector to receive traces via OTLP and forward them to your chosen backend (Jaeger or Tempo).
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  jaeger:
    endpoint: "jaeger:14250"
  # or for Tempo
  otlp:
    endpoint: "tempo:4317"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
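Note that recent Collector releases have dropped the dedicated jaeger exporter, and Jaeger v1.35+ ingests OTLP natively, so a single otlp exporter can serve either backend. A sketch of that variant (the exporter name `otlp/jaeger` and the insecure TLS setting are assumptions for a trusted internal network):

```yaml
exporters:
  # Jaeger v1.35+ accepts OTLP directly on port 4317 (gRPC).
  otlp/jaeger:
    endpoint: "jaeger:4317"
    tls:
      insecure: true  # assumes a trusted internal network; enable TLS otherwise

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]
```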
2. Instrument OpenClaw
Add the OpenTelemetry SDK to the OpenClaw codebase. Initialise a tracer provider that points to the collector.
import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/sdk/trace"
)

func initTracer() func(context.Context) error {
	ctx := context.Background()
	// WithInsecure is acceptable inside a trusted network; use TLS otherwise.
	exporter, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("collector:4317"),
		otlptracegrpc.WithInsecure())
	if err != nil {
		log.Fatalf("failed to create OTLP exporter: %v", err)
	}
	tp := trace.NewTracerProvider(trace.WithBatcher(exporter))
	otel.SetTracerProvider(tp)
	// Propagate W3C trace context across service boundaries.
	otel.SetTextMapPropagator(propagation.TraceContext{})
	// Call the returned function on shutdown to flush remaining spans.
	return tp.Shutdown
}
Wrap key request handlers with spans:
func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	ctx, span := otel.Tracer("openclaw").Start(r.Context(), "handle-request")
	defer span.End()
	// Pass the span context downstream so child spans attach correctly.
	r = r.WithContext(ctx)
	// existing logic
}
3. Export Traces to Jaeger or Grafana Tempo
The collector configuration from step 1 already forwards spans. Ensure the backend is reachable and that the appropriate ports are open on the UBOS node.
4. Correlate Traces with Prometheus Metrics
Prometheus does not attach trace IDs to metrics out of the box; the link between the two signals is the trace_id recorded on metric samples as OpenMetrics exemplars, which Grafana can use to jump from a time-series panel straight to the matching trace.
- Enable the Prometheus exporter in the OpenTelemetry Collector (or exemplar support in your instrumentation) so that metrics reaching Prometheus carry trace_id exemplars.
- In Grafana, enable exemplar display on the Prometheus data source, link it to your Jaeger/Tempo data source, and build a dashboard that shows the trace view alongside the time-series panels.
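The glue between an exemplar and a trace is the trace ID itself, which travels between services in the W3C traceparent header that propagation.TraceContext{} reads and writes. A stdlib sketch of extracting the trace ID from such a header:

```go
package main

import (
	"fmt"
	"strings"
)

// traceIDFromTraceparent extracts the 32-hex-digit trace ID from a W3C
// traceparent header of the form "version-traceid-spanid-flags".
func traceIDFromTraceparent(header string) (string, bool) {
	parts := strings.Split(header, "-")
	if len(parts) != 4 || len(parts[1]) != 32 {
		return "", false
	}
	return parts[1], true
}

func main() {
	h := "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
	if id, ok := traceIDFromTraceparent(h); ok {
		// This is the value to search for in Jaeger/Tempo or Grafana.
		fmt.Println(id)
	}
}
```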
5. Verify the Setup
- Generate traffic to OpenClaw (e.g., run a few API calls).
- Open the Jaeger UI (http://jaeger:16686) or the Tempo UI and locate the recent traces.
- Open Grafana, go to the OpenClaw metrics dashboard, and filter by the trace ID shown in the trace view. You should see CPU, memory, and request latency metrics aligned with the trace.
Conclusion
By adding OpenTelemetry instrumentation to OpenClaw, exporting spans to Jaeger or Grafana Tempo, and linking those spans to Prometheus metrics, developers gain end‑to‑end visibility of request lifecycles on UBOS. This makes debugging performance bottlenecks and understanding system behaviour much easier.
For more details on hosting OpenClaw on UBOS, visit the OpenClaw production hosting guide.