- Updated: March 18, 2026
Instrumenting the OpenClaw Rating API with OpenTelemetry
Instrumenting the OpenClaw Rating API with OpenTelemetry gives you end‑to‑end visibility, letting you trace every rating request, collect latency metrics, and quickly pinpoint performance bottlenecks.
1. Introduction
Observability is no longer a nice‑to‑have; it’s a prerequisite for reliable SaaS services. When you expose a rating engine like OpenClaw to external clients, you need to know exactly how each call behaves in production. This guide walks developers through a complete instrumentation workflow using the OpenTelemetry SDK, from setup to validation, and shows how to publish the final guide on the UBOS documentation portal.
2. Overview of OpenClaw Rating API
OpenClaw provides a RESTful endpoint /rate that accepts a JSON payload with a userId, itemId, and a numeric score. The service calculates a weighted rating, stores it in a PostgreSQL database, and returns the updated aggregate score.
- POST /rate – submit a new rating.
- GET /rating/{itemId} – retrieve the current aggregate.
- DELETE /rating/{ratingId} – remove a rating (admin only).
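As a quick sanity check on the payload shape described above, a minimal validator might look like this. The field names come from this guide; the concrete rules (non-empty ids, numeric score) are assumptions, not OpenClaw's documented contract:

```javascript
// Hypothetical validator for the /rate request body.
// Field names match this guide; the specific rules are assumptions.
function validateRatingPayload(body) {
  const errors = [];
  if (typeof body.userId !== 'string' || body.userId.length === 0) {
    errors.push('userId must be a non-empty string');
  }
  if (typeof body.itemId !== 'string' || body.itemId.length === 0) {
    errors.push('itemId must be a non-empty string');
  }
  if (typeof body.score !== 'number' || Number.isNaN(body.score)) {
    errors.push('score must be a number');
  }
  return errors; // empty array means the payload is acceptable
}

console.log(validateRatingPayload({ userId: 'u123', itemId: 'i456', score: 4 })); // []
```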
3. Why instrument with OpenTelemetry?
OpenTelemetry is a vendor‑agnostic standard that captures traces, metrics, and logs in a single API. By instrumenting OpenClaw you gain:
- Distributed tracing – visualize the flow from API gateway to database.
- Latency metrics – monitor 95th‑percentile response times.
- Error rates – automatically flag HTTP 5xx spikes.
- Future‑proofing – switch exporters (Jaeger, Prometheus, OTLP) without code changes.
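To make the latency bullet concrete, here is how a 95th‑percentile is computed from raw samples; the exported histogram metrics summarize exactly this kind of distribution. The nearest‑rank method below is one common definition:

```javascript
// Nearest-rank percentile over raw latency samples.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// A single slow outlier dominates the tail even though most calls are fast.
const latenciesMs = [12, 15, 11, 250, 14, 13, 16, 12, 18, 900];
console.log(percentile(latenciesMs, 95)); // → 900
```

This is why p95/p99 are the numbers worth alerting on: averages hide exactly the requests your users complain about.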
4. Prerequisites
Before you start, make sure you have:
- Node.js ≥ 14 or Python ≥ 3.8 (the examples use Node.js).
- A running instance of the OpenClaw Rating API (Docker image available on UBOS).
- Access to an OpenTelemetry collector (Docker‑compose file provided in the UBOS host OpenClaw on UBOS guide).
- Basic familiarity with async/await and Express.js middleware.
5. Step‑by‑step instrumentation guide
5.1 Setting up the OpenTelemetry SDK
Install the required packages:
npm install @opentelemetry/api @opentelemetry/sdk-node \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/sdk-metrics \
  @opentelemetry/exporter-trace-otlp-http \
  @opentelemetry/exporter-metrics-otlp-http
Create otel-config.js in the project root:
// otel-config.js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');

const traceExporter = new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' });
const metricExporter = new OTLPMetricExporter({ url: 'http://localhost:4318/v1/metrics' });

const sdk = new NodeSDK({
  serviceName: 'openclaw-rating',
  traceExporter,
  // NodeSDK takes a metric *reader*, not a bare exporter
  metricReader: new PeriodicExportingMetricReader({ exporter: metricExporter }),
  instrumentations: [getNodeAutoInstrumentations()],
});

try {
  sdk.start(); // synchronous in recent SDK releases
  console.log('🛰️ OpenTelemetry initialized');
} catch (err) {
  console.error('Failed to start OpenTelemetry', err);
}

// Flush buffered telemetry before the process exits
process.on('SIGTERM', () => sdk.shutdown().finally(() => process.exit(0)));
Import this configuration at the very top of your server.js file so that instrumentation is registered before any other module loads; alternatively, preload it without touching the code by starting the service with node -r ./otel-config.js server.js.
5.2 Adding spans to API calls
While auto‑instrumentation covers HTTP and database layers, you may want custom spans for business logic (e.g., rating calculation).
// server.js (excerpt)
require('./otel-config'); // Must be first so auto-instrumentation can patch modules

const express = require('express');
const { trace, SpanStatusCode } = require('@opentelemetry/api');

const app = express();
app.use(express.json());

app.post('/rate', async (req, res) => {
  const tracer = trace.getTracer('openclaw-rating');
  const span = tracer.startSpan('processRating', {
    attributes: {
      'http.method': req.method,
      'http.route': '/rate',
      'user.id': req.body.userId,
      'item.id': req.body.itemId,
    },
  });
  try {
    // Simulate rating logic
    const result = await calculateWeightedScore(req.body);
    span.setAttribute('rating.score', result.score);
    res.json({ success: true, aggregate: result.aggregate });
  } catch (err) {
    span.recordException(err);
    span.setStatus({ code: SpanStatusCode.ERROR, message: err.message });
    res.status(500).json({ error: err.message });
  } finally {
    span.end();
  }
});

function calculateWeightedScore({ userId, itemId, score }) {
  // Placeholder for real DB interaction
  return new Promise(resolve => {
    setTimeout(() => {
      resolve({ score, aggregate: (score * 0.8).toFixed(2) });
    }, 50);
  });
}

app.listen(3000, () => console.log('🚀 OpenClaw listening on :3000'));
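calculateWeightedScore above is only a stub. One plausible shape for a real "weighted rating" is an exponential moving average over successive scores; this is an illustration only (the 0.8 weight mirrors the stub, but OpenClaw's actual formula is not specified in this guide):

```javascript
// Sketch: exponentially weighted aggregate. The previous aggregate
// keeps `weight` of its value and the new score contributes the rest.
// Purely illustrative; not OpenClaw's documented formula.
function updateAggregate(previousAggregate, newScore, weight = 0.8) {
  return +(weight * previousAggregate + (1 - weight) * newScore).toFixed(2);
}

console.log(updateAggregate(4.0, 5)); // → 4.2
```

A formula like this is a good candidate for a custom span attribute (e.g. rating.aggregate), so you can correlate slow requests with unusual score distributions in Jaeger.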
5.3 Exporting metrics and traces
The SDK configuration already points to an OTLP collector. To visualize data, run the OpenTelemetry Collector with Jaeger and Prometheus exporters:
# docker-compose.yml (excerpt)
version: '3.8'
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/collector.yaml"]
    volumes:
      - ./collector.yaml:/etc/collector.yaml
    ports:
      - "4318:4318"   # OTLP HTTP receiver
      - "9090:9090"   # Prometheus scrape endpoint exposed by the collector
  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686" # Jaeger UI
Sample collector.yaml (focus on OTLP → Jaeger/Prometheus):
# collector.yaml
receivers:
  otlp:
    protocols:
      http:

exporters:
  jaeger:
    endpoint: "jaeger:14250"  # gRPC, no scheme prefix
    tls:
      insecure: true          # plain gRPC inside the compose network
  prometheus:
    endpoint: "0.0.0.0:9090"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
6. Complete code example
The following repository structure shows a minimal, production‑ready OpenClaw service with full OpenTelemetry support.
.
├── Dockerfile
├── docker-compose.yml
├── otel-config.js
├── collector.yaml
├── server.js
└── package.json
All files are available in the UBOS GitHub example. Deploy with a single command:
docker-compose up -d
7. Best‑practice tips
- Keep span attribute names aligned with the OpenTelemetry semantic conventions, e.g. http.method, db.system, and service.name.
8. Testing and validation
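A small helper can keep those semantic-convention attribute names in one place so every custom span is labelled consistently. This is a sketch; the db.system value reflects the PostgreSQL store mentioned in section 2:

```javascript
// Sketch: centralize semantic-convention attribute names so custom
// spans across the service stay consistent.
function semanticAttributes(req) {
  return {
    'http.method': req.method,
    'http.route': req.path,
    'db.system': 'postgresql',        // OpenClaw stores ratings in PostgreSQL
    'service.name': 'openclaw-rating',
  };
}

console.log(semanticAttributes({ method: 'POST', path: '/rate' }));
```

Spread the result into startSpan's attributes option and add per-request keys (user.id, item.id) on top.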
After deployment, verify that traces appear in Jaeger and metrics in Prometheus:
- Open the Jaeger UI and search for the service name openclaw-rating.
- Check the Prometheus graph for the metric http_server_duration_seconds.
- Run a simple curl test to generate traffic:
curl -X POST http://localhost:3000/rate -H "Content-Type: application/json" -d '{"userId":"u123","itemId":"i456","score":4}'
If you see a new trace with the custom span “processRating”, your instrumentation is successful.
9. Publishing the article on UBOS documentation
UBOS uses a Markdown‑to‑HTML pipeline for its developer portal. Follow these steps:
- Save the article as openclaw-otel-guide.md in the docs/ repository.
- Run the CI job npm run docs:build to generate the HTML.
- Open a pull request against the main branch. The CI will automatically validate internal links, including the host OpenClaw on UBOS reference.
- After merge, the guide appears under Developers → Observability → OpenTelemetry Integration on the UBOS blog.
10. Conclusion
By following this step‑by‑step guide, you’ve turned a simple rating endpoint into a fully observable service. The collected traces and metrics empower you to detect latency spikes, understand user behavior, and maintain SLA compliance—all without locking into a single vendor. As your OpenClaw deployment scales, you can extend the same instrumentation pattern to other microservices, ensuring a consistent observability strategy across your entire UBOS‑powered stack.
Ready to try it yourself? Host OpenClaw on UBOS today and start visualizing your rating API in real time.