- Updated: March 24, 2026
Implementing Metrics, Logging, and Tracing for the OpenClaw Full‑Stack Template
To add full observability to the OpenClaw Full‑Stack Template you need to configure three pillars: metrics with Prometheus / Grafana, logging with Winston / Logstash, and distributed tracing with OpenTelemetry / Jaeger.
Introduction
OpenClaw is a powerful full‑stack starter kit, available on the UBOS platform, that accelerates SaaS development. While its scaffolding gives you authentication, CRUD APIs, and a React front‑end out of the box, production‑grade applications still need robust observability. Without metrics, logs, and traces you cannot detect performance regressions, debug failures, or meet SLA commitments.
This hands‑on guide walks developers and DevOps engineers through the exact steps to instrument OpenClaw with industry‑standard tools, providing ready‑to‑copy code snippets, configuration files, and troubleshooting tips. By the end you’ll have a live dashboard, searchable logs, and end‑to‑end request tracing—all integrated with the UBOS platform.
Prerequisites
- Node.js ≥ 18 and npm ≥ 9 installed locally.
- Docker ≥ 20.10 (for running Prometheus, Grafana, Jaeger, and Logstash containers).
- An existing OpenClaw project cloned from the OpenClaw hosting page.
- Basic familiarity with TypeScript, Express, and React.
- Access to a UBOS account to explore the UBOS Enterprise AI platform if you plan to scale.
1️⃣ Setting Up Metrics (Prometheus & Grafana)
Metrics give you a quantitative view of system health—CPU usage, request latency, error rates, and more. Prometheus scrapes an HTTP endpoint (conventionally /metrics) that exposes data in the Prometheus exposition format, while Grafana visualizes that time‑series data.
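For context, the exposition format is plain text: each metric is a name, an optional set of labels, and a numeric value, preceded by HELP and TYPE comments. A scrape of a request counter like the one built later in this guide returns lines such as (values illustrative):

```
# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{method="GET",route="/users",code="200"} 42
```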
a. Install Prometheus & Grafana with Docker‑Compose
version: '3.8'
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
b. Create prometheus.yml
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'openclaw'
    static_configs:
      - targets: ['host.docker.internal:4000'] # Express server exposing metrics
c. Instrument Express with prom-client
Install the library and expose a metrics endpoint.
npm install prom-client
// src/metrics.ts
import { collectDefaultMetrics, Registry, Counter, Histogram } from 'prom-client';
import type { NextFunction, Request, Response } from 'express';

const register = new Registry();
collectDefaultMetrics({ register });

export const httpRequestDuration = new Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'code'],
  buckets: [0.05, 0.1, 0.3, 0.5, 1, 3],
  registers: [register],
});

export const httpRequestsTotal = new Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests',
  labelNames: ['method', 'route', 'code'],
  registers: [register],
});

export const metricsMiddleware = (req: Request, res: Response, next: NextFunction) => {
  const end = httpRequestDuration.startTimer();
  res.on('finish', () => {
    const { method, originalUrl: route } = req;
    const { statusCode: code } = res;
    httpRequestsTotal.inc({ method, route, code });
    end({ method, route, code });
  });
  next();
};

export const metricsEndpoint = async (req: Request, res: Response) => {
  res.set('Content-Type', register.contentType);
  // register.metrics() returns a Promise in prom-client v13+, so await it
  res.end(await register.metrics());
};
d. Wire Middleware into OpenClaw Server
// src/server.ts
import express from 'express';
import { metricsMiddleware, metricsEndpoint } from './metrics';
const app = express();
app.use(metricsMiddleware);
// ... existing routes ...
app.get('/metrics', metricsEndpoint);
app.listen(4000, () => console.log('Server running on :4000'));
After restarting the server, open http://localhost:9090/targets to verify that Prometheus sees the openclaw job. Then log into Grafana (http://localhost:3000) and add Prometheus as a data source to start building dashboards.
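With data flowing, a couple of starter PromQL queries against the metrics defined in src/metrics.ts make useful first Grafana panels:

```
# Requests per second, broken down by route (5-minute rate)
sum by (route) (rate(http_requests_total[5m]))

# 95th-percentile request latency across all routes
histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))
```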
2️⃣ Setting Up Logging (Winston & Logstash)
Structured logs are essential for debugging and for feeding security information into SIEMs. Winston provides a flexible transport system, while Logstash aggregates logs from containers and forwards them to Elasticsearch or any other sink.
a. Install Winston and a JSON formatter
npm install winston winston-daily-rotate-file
// src/logger.ts
import winston from 'winston';
import 'winston-daily-rotate-file';

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  defaultMeta: { service: 'openclaw' },
  transports: [
    new winston.transports.DailyRotateFile({
      filename: 'logs/openclaw-%DATE%.log',
      datePattern: 'YYYY-MM-DD',
      zippedArchive: true,
      maxSize: '20m',
      maxFiles: '14d',
    }),
    new winston.transports.Console({
      format: winston.format.combine(
        winston.format.colorize(),
        winston.format.simple()
      ),
    }),
  ],
});

export default logger;
b. Replace console.log calls
// src/routes/user.ts
import logger from '../logger';

router.post('/register', async (req, res) => {
  try {
    // registration logic …
    logger.info('User registration successful', { userId: newUser.id });
    res.status(201).json(newUser);
  } catch (err) {
    logger.error('Registration failed', { error: err.message });
    res.status(500).json({ error: 'Internal Server Error' });
  }
});
c. Spin up Logstash with Docker
# docker-compose.yml (add Logstash)
logstash:
  image: docker.elastic.co/logstash/logstash:8.9.0
  ports:
    - "5000:5000"
  volumes:
    - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    - ./logs:/usr/src/app/logs   # mount the app's log directory so the pipeline can read it
d. Simple Logstash pipeline (logstash.conf)
input {
  file {
    path => "/usr/src/app/logs/*.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => json
  }
}

output {
  stdout { codec => rubydebug }
  # Uncomment to send to Elasticsearch
  # elasticsearch { hosts => ["elasticsearch:9200"] index => "openclaw-logs-%{+YYYY.MM.dd}" }
}
Now every Winston JSON log lands in Logstash, which can forward it to Elasticsearch, Splunk, or any downstream analytics platform.
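Structured logs often carry request payloads, so it is worth scrubbing secrets before they reach Logstash. Below is a minimal sketch of a redaction helper—the field list is an assumption, extend it for your own schema—that you can apply to metadata before passing it to the Winston logger:

```typescript
// Recursively mask well-known sensitive fields before logging.
// The SENSITIVE set is illustrative -- adjust it to your own payloads.
const SENSITIVE = new Set(['password', 'token', 'authorization', 'cardNumber']);

export function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === 'object') {
    const out: Record<string, unknown> = {};
    for (const [key, val] of Object.entries(value)) {
      out[key] = SENSITIVE.has(key) ? '[REDACTED]' : redact(val);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}
```

Usage: logger.info('User login', redact({ user: 'alice', password: 'hunter2' })) logs the username but masks the password.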
3️⃣ Setting Up Tracing (OpenTelemetry & Jaeger)
Distributed tracing lets you follow a request as it hops from the API gateway to the database and back to the UI. OpenTelemetry is the vendor‑agnostic SDK, while Jaeger provides a UI for visualizing spans.
a. Add OpenTelemetry packages
npm install @opentelemetry/api @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-jaeger
b. Initialize OpenTelemetry in a separate file
// src/tracing.ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { JaegerExporter } from '@opentelemetry/exporter-jaeger';

const exporter = new JaegerExporter({
  endpoint: 'http://localhost:14268/api/traces',
});

const sdk = new NodeSDK({
  traceExporter: exporter,
  instrumentations: [getNodeAutoInstrumentations()],
});

// sdk.start() is synchronous in recent SDK versions (it returned a Promise
// in older releases), so wrap it in try/catch rather than chaining .then()
try {
  sdk.start();
  console.log('🛠️ OpenTelemetry initialized');
} catch (err) {
  console.error('❌ OpenTelemetry failed to start', err);
}
c. Run Jaeger with Docker‑Compose
# docker-compose.yml (add Jaeger)
jaeger:
  image: jaegertracing/all-in-one:1.53
  ports:
    - "16686:16686" # UI
    - "14268:14268" # Collector
d. Import tracing before the server starts
// src/server.ts (top)
import './tracing'; // must be first import
import express from 'express';
...
Visit http://localhost:16686 after making a few API calls; you’ll see a trace per request, complete with DB query spans automatically captured by the OpenTelemetry instrumentation.
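Under the hood, trace context propagates between services via the W3C traceparent header (version-traceId-spanId-flags). OpenTelemetry handles this automatically, but a small parser makes it easy to pull the trace ID out of an incoming request for manual correlation with Jaeger. A sketch—this helper is not part of the OpenTelemetry API:

```typescript
// Parse a W3C traceparent header so a request can be matched to its Jaeger trace.
export interface TraceParent {
  version: string;
  traceId: string;
  spanId: string;
  sampled: boolean;
}

export function parseTraceparent(header: string): TraceParent | null {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!m) return null;
  const [, version, traceId, spanId, flags] = m;
  // All-zero trace or span IDs are invalid per the Trace Context spec.
  if (/^0+$/.test(traceId) || /^0+$/.test(spanId)) return null;
  return { version, traceId, spanId, sampled: (parseInt(flags, 16) & 0x01) === 1 };
}
```

Given a sampled request, parseTraceparent(req.headers['traceparent']).traceId is the ID you can paste into the Jaeger UI search box.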
4️⃣ Step‑by‑Step Code Examples for Each Component
Metrics Example
// src/routes/product.ts
import { httpRequestDuration } from '../metrics';

router.get('/list', async (req, res) => {
  const start = Date.now();
  const products = await Product.findAll();
  const duration = (Date.now() - start) / 1000;
  httpRequestDuration.observe({ method: 'GET', route: '/products/list', code: 200 }, duration);
  res.json(products);
});
Logging Example
// src/middleware/errorHandler.ts
import logger from '../logger';

export const errorHandler = (err, req, res, next) => {
  logger.error('Unhandled exception', {
    message: err.message,
    stack: err.stack,
    path: req.path,
    method: req.method,
  });
  res.status(500).json({ error: 'Something went wrong' });
};
Tracing Example
// src/services/payment.ts
import { trace, SpanStatusCode } from '@opentelemetry/api';

export const chargeCard = async (cardInfo) => {
  const span = trace.getTracer('openclaw').startSpan('chargeCard');
  try {
    // simulate external payment gateway call
    await externalGateway.charge(cardInfo);
    span.setStatus({ code: SpanStatusCode.OK });
  } catch (e) {
    span.recordException(e);
    span.setStatus({ code: SpanStatusCode.ERROR, message: e.message });
    throw e;
  } finally {
    span.end();
  }
};
Docker‑Compose Snapshot
services:
  prometheus: …
  grafana: …
  logstash: …
  jaeger: …
  app:
    build: .
    ports: ["4000:4000"]
    volumes:
      - .:/usr/src/app
5️⃣ Configuration Tips & Common Pitfalls
- Metric naming consistency: Follow the snake_case convention and prefix all custom metrics with openclaw_ to avoid collisions in shared Prometheus instances.
- Log rotation: Winston’s DailyRotateFile prevents disk‑full crashes. Verify that the maxFiles setting aligns with your retention policy.
- Exporter endpoint: When running Docker on macOS, use host.docker.internal for the Prometheus target; on Linux, reference the container name directly.
- Jaeger sampling: The default sampler records every trace, which can overwhelm storage. Switch to probabilistic sampling in production:
import { TraceIdRatioBasedSampler } from '@opentelemetry/sdk-trace-base';

const sdk = new NodeSDK({
  traceExporter: exporter,
  sampler: new TraceIdRatioBasedSampler(0.1), // sample 10 % of requests
});
Another frequent issue is mismatched environment variables between Docker Compose and the Node process. Keep a .env file at the repo root and reference it in both docker-compose.yml and your Node config.
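A minimal sketch, with illustrative variable names:

```
# .env (repo root) -- names are examples, match them to your own config
JAEGER_ENDPOINT=http://jaeger:14268/api/traces
LOG_LEVEL=info
```

In docker-compose.yml, point the app service at it with env_file: .env; in Node, read the same values via process.env.JAEGER_ENDPOINT and process.env.LOG_LEVEL so both sides stay in sync.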
6️⃣ Testing & Validation
Before promoting to production, run the following checks:
- Metrics sanity: curl http://localhost:4000/metrics should return a non‑empty payload. In Grafana, import the official Node Exporter dashboard and verify request latency graphs.
- Log ingestion: Tail the Logstash container (docker logs -f logstash) while triggering API calls. Confirm JSON logs appear in the stdout output.
- Trace completeness: Open the Jaeger UI, search for a recent trace, and ensure spans for HTTP, DB, and any external services are present.
- Load test: Use k6 to generate 100 RPS for 2 minutes. Verify that Prometheus scrapes without errors and that Jaeger does not drop spans.
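The load‑test step can be scripted with k6; a minimal sketch (the endpoint path is an assumption—point it at any OpenClaw route):

```javascript
import http from 'k6/http';

// 100 requests per second for 2 minutes, as described above.
export const options = {
  scenarios: {
    steady: {
      executor: 'constant-arrival-rate',
      rate: 100,
      timeUnit: '1s',
      duration: '2m',
      preAllocatedVUs: 50,
      maxVUs: 200,
    },
  },
};

export default function () {
  http.get('http://localhost:4000/products/list'); // hypothetical endpoint
}
```

Run it with k6 run load.js while watching the Prometheus targets page and the Jaeger UI for scrape errors or dropped spans.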
7️⃣ Leverage UBOS Ecosystem for Faster Delivery
UBOS offers a suite of ready‑made components that can replace hand‑rolled observability pipelines:
- AI marketing agents can automatically tag logs with campaign IDs.
- The Workflow automation studio lets you trigger alerts when a metric crosses a threshold.
- Use the Web app editor on UBOS to embed Grafana panels directly into your admin UI.
- Explore the UBOS templates for quick start—the AI SEO Analyzer template already ships with Prometheus metrics.
Conclusion
By integrating Prometheus‑Grafana metrics, Winston‑Logstash logging, and OpenTelemetry‑Jaeger tracing, you transform the OpenClaw Full‑Stack Template from a development sandbox into a production‑ready service with full observability. The steps above are deliberately modular—pick the pieces you need now and expand later as traffic grows.
Ready to see OpenClaw in action on a managed environment? Deploy it with a single click on the OpenClaw hosting page and start monitoring your SaaS instantly.