- Updated: March 17, 2026
Building Real‑Time Plugin Recommendations for OpenClaw Agents with the Rating API
Real‑time plugin recommendations for OpenClaw agents are generated by consuming the Rating API, normalizing the data, profiling each agent, and applying a weighted scoring algorithm that runs on‑the‑fly inside a lightweight Node‑RED service.
1. Introduction
OpenClaw agents are the execution backbone of modern AI‑driven workflows. As the ecosystem of plugins expands, agents need a dynamic, data‑driven way to surface the most relevant extensions at the moment they are needed. Real‑time recommendations improve:
- Task success rates by selecting the highest‑rated plugin for a given context.
- Developer productivity – agents no longer require hard‑coded plugin maps.
- Customer satisfaction – users see smarter, context‑aware suggestions.
This guide walks you through the entire pipeline: from pulling the Rating API dataset, through building a scoring engine in Node‑RED, to deploying a containerized recommendation micro‑service that scales with Kubernetes.
2. Understanding the Rating API
2.1 Dataset structure and key fields
The Rating API returns a JSON array where each record represents a user‑generated rating for a plugin. The most useful fields are:
| Field | Type | Description |
|---|---|---|
| plugin_id | string | Unique identifier of the plugin. |
| rating | float (0‑5) | User‑submitted score. |
| user_id | string | Identifier of the rater (used for profiling). |
| timestamp | ISO‑8601 | When the rating was submitted – crucial for recency weighting. |
| tags | array[string] | Semantic tags (e.g., “image‑gen”, “nlp”). |
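For reference, a single record shaped like the field table above might look as follows (the values are illustrative, not taken from a live deployment):

```javascript
// One illustrative Rating API record matching the field table above.
const sampleRating = {
  plugin_id: 'gpt-image-gen-v2',       // unique plugin identifier
  rating: 4.5,                         // float in the 0-5 range
  user_id: 'user-1138',                // rater, used later for profiling
  timestamp: '2026-03-15T09:30:00Z',   // ISO-8601, feeds recency weighting
  tags: ['image-gen', 'stable-diffusion'],
};
```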
2.2 Accessing the API (authentication, endpoints)
The Rating API is protected by a bearer token. Obtain the token from your OpenClaw dashboard, then call the endpoint:
GET https://api.openclaw.io/v1/ratings
Headers:
Authorization: Bearer <YOUR_TOKEN>
Accept: application/json
For large installations, pagination is supported via ?page=<num>&size=<num>. The API also offers a bulk export (/v1/ratings/export) for offline training.
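Outside of Node-RED, the paginated endpoint can be drained with a small helper. The sketch below assumes the `page`/`size` parameters described above and Node 18+ for the built-in `fetch`; the helper names and the "last short page" termination heuristic are our own, so adjust them to your deployment:

```javascript
// Build the paginated ratings URL described above (parameter names per the docs).
function buildRatingsUrl(base, page, size) {
  const url = new URL('/v1/ratings', base);
  url.searchParams.set('page', String(page));
  url.searchParams.set('size', String(size));
  return url.toString();
}

// Drain every page; stops when a page comes back shorter than `size`.
async function fetchAllRatings(base, token, size = 500) {
  const all = [];
  for (let page = 1; ; page++) {
    const res = await fetch(buildRatingsUrl(base, page, size), {
      headers: { Authorization: `Bearer ${token}`, Accept: 'application/json' },
    });
    if (!res.ok) throw new Error(`Rating API returned ${res.status}`);
    const batch = await res.json();
    all.push(...batch);
    if (batch.length < size) return all;
  }
}
```

For one-off offline training runs, the bulk export endpoint is the better fit; the loop above is aimed at incremental refreshes.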
3. Consuming the Rating Dataset
3.1 Fetching ratings in Node‑RED
Node‑RED provides a visual flow engine that fits perfectly into OpenClaw’s micro‑service architecture. The following flow fetches the latest ratings every 60 seconds, normalizes them, and stores them in an in‑memory cache (Redis is recommended for production).
{
"id":"rating-fetcher",
"type":"tab",
"label":"Rating API Consumer",
"nodes":[
{
"id":"1a2b3c",
"type":"inject",
"z":"rating-fetcher",
"name":"Every 60 s",
"props":[{"p":"payload"}],
"repeat":"60",
"crontab":"",
"once":true,
"onceDelay":0.1,
"topic":"",
"payload":"",
"payloadType":"date",
"x":140,
"y":80,
"wires":[["http-request"]]
},
{
"id":"http-request",
"type":"http request",
"z":"rating-fetcher",
"name":"GET /ratings",
"method":"GET",
"ret":"obj",
"paytoqs":"ignore",
"url":"https://api.openclaw.io/v1/ratings",
"tls":"",
"persist":false,
"proxy":"",
"authType":"bearer",
"bearer":"{{env.OPENCLAW_TOKEN}}",
"x":340,
"y":80,
"wires":[["function-normalize"]]
},
{
"id":"function-normalize",
"type":"function",
"z":"rating-fetcher",
"name":"Normalize & Cache",
"func":"// Convert timestamps to epoch & compute a simple recency weight\nconst now = Date.now();\nconst normalized = msg.payload.map(r => {\n    const ageHours = (now - new Date(r.timestamp).getTime()) / 3600000;\n    const recencyWeight = Math.max(0, 1 - ageHours / 168); // linear 1-week decay\n    return {\n        pluginId: r.plugin_id,\n        rating: r.rating,\n        userId: r.user_id,\n        tags: r.tags,\n        weight: recencyWeight * r.rating\n    };\n});\nmsg.payload = JSON.stringify(normalized); // Redis SET expects a string\nreturn msg;",
"outputs":1,
"noerr":0,
"initialize":"",
"finalize":"",
"libs":[],
"x":560,
"y":80,
"wires":[["redis-out"]]
},
{
"id":"redis-out",
"type":"redis out",
"z":"rating-fetcher",
"name":"Cache ratings",
"command":"set",
"key":"openclaw:ratings",
"field":"",
"expire":"300",
"x":770,
"y":80,
"wires":[[]]
}
]
}
3.2 Normalizing and storing data
Normalization steps include:
- Convert ISO timestamps to epoch milliseconds.
- Apply a recency decay (e.g., linear decay over 7 days).
- Calculate a weighted rating = rating × recencyWeight.
- Group by plugin_id to compute aggregate scores (average, median, count).
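The grouping step can be sketched as a plain reducer over the normalized records; the output field names follow the aggregate structure used elsewhere in this guide (the median is omitted for brevity):

```javascript
// Collapse raw ratings into one aggregate per plugin_id.
function aggregateByPlugin(ratings) {
  const groups = new Map();
  for (const r of ratings) {
    const g = groups.get(r.plugin_id) ?? { sum: 0, count: 0, lastSeen: 0, tags: new Set() };
    g.sum += r.rating;
    g.count += 1;
    g.lastSeen = Math.max(g.lastSeen, new Date(r.timestamp).getTime());
    for (const t of r.tags) g.tags.add(t);   // union of all tags seen for the plugin
    groups.set(r.plugin_id, g);
  }
  return [...groups].map(([pluginId, g]) => ({
    pluginId,
    avgScore: g.sum / g.count,
    ratingCount: g.count,
    lastSeen: g.lastSeen,
    tags: [...g.tags],
  }));
}
```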
The aggregated structure stored in Redis looks like:
{
  "pluginId": "gpt-image-gen-v2",
  "avgScore": 4.3,
  "ratingCount": 128,
  "lastSeen": 1713456000000,
  "tags": ["image-gen", "stable-diffusion"]
}
4. Personalized On‑the‑Fly Recommendation Logic
4.1 User/agent profiling
Each OpenClaw agent carries a lightweight profile in its execution context:
- Domain tags – derived from the current task (e.g., “nlp”, “audio”).
- Historical usage – plugins the agent has called in the last 30 days.
- Preference weight – optional per‑user bias (e.g., “prefer open‑source”).
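Concretely, a profile might be assembled like this. The shape is a hypothetical sketch; OpenClaw's actual execution context may expose these fields differently:

```javascript
// Hypothetical agent-profile builder; field names are illustrative.
function buildAgentProfile(domainTags, recentPluginIds, preferences = {}) {
  return {
    domainTags: [...new Set(domainTags)],     // task-derived tags, deduplicated
    recentPlugins: new Set(recentPluginIds),  // last-30-day usage, O(1) lookups
    preferences: { preferOpenSource: false, ...preferences },
  };
}
```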
4.2 Scoring algorithm (weighted rating, recency, relevance)
The final recommendation score S for a candidate plugin p is computed as:
S(p) = α·R(p) + β·C(p) + γ·U(p)
where:
- R(p) – normalized weighted rating from the Rating API (0‑1).
- C(p) – contextual relevance: Jaccard similarity between task tags and plugin tags.
- U(p) – usage boost: +0.1 if the agent has used the plugin before.
Coefficients α, β, γ are tunable; a typical starting point is α=0.5, β=0.4, γ=0.1.
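As a worked example, the formula is a direct weighted sum: with the default coefficients, a plugin with an average rating of 4.3 (R = 0.86), half its tags matching the task (C = 0.5), and prior usage (U = 0.1) scores 0.5·0.86 + 0.4·0.5 + 0.1·0.1 = 0.64.

```javascript
// S(p) = α·R(p) + β·C(p) + γ·U(p), with the starting coefficients from the text.
function scorePlugin(R, C, U, alpha = 0.5, beta = 0.4, gamma = 0.1) {
  return alpha * R + beta * C + gamma * U;
}
```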
4.3 Real‑time decision flow
The decision flow can be expressed as a three‑step pipeline:
- Lookup – Pull the cached aggregate scores for plugins matching the task tags.
- Score – Apply the formula above for each candidate.
- Select – Return the top‑N plugins (default N=3) to the OpenClaw runtime.
Because the cache lives in Redis and the scoring function is pure JavaScript, the entire process finishes in < 10 ms, satisfying real‑time constraints.
5. Code Snippets
5.1 JavaScript scoring function
// utils.js
function jaccard(setA, setB) {
  const a = new Set(setA);
  const b = new Set(setB);
  const union = new Set([...a, ...b]);
  if (union.size === 0) return 0; // guard against 0/0 when both tag lists are empty
  const intersection = new Set([...a].filter(x => b.has(x)));
  return intersection.size / union.size;
}
// recommendation.js
async function recommend(taskTags, agentId, redisClient) {
const cached = await redisClient.get('openclaw:ratings');
const plugins = JSON.parse(cached);
// Filter by tag overlap
const candidates = plugins.filter(p => jaccard(p.tags, taskTags) > 0);
// Retrieve agent usage history (simplified)
const usageKey = `agent:${agentId}:history`;
const usedPlugins = await redisClient.smembers(usageKey);
const α = 0.5, β = 0.4, γ = 0.1;
const scores = candidates.map(p => {
const R = p.avgScore / 5; // normalize 0‑1
const C = jaccard(p.tags, taskTags);
const U = usedPlugins.includes(p.pluginId) ? 0.1 : 0;
const S = α * R + β * C + γ * U;
return { pluginId: p.pluginId, score: S };
});
// Sort descending and pick top‑3
scores.sort((a, b) => b.score - a.score);
return scores.slice(0, 3);
}
module.exports = { recommend };
5.2 Example API call with curl
curl -X POST https://api.openclaw.io/v1/agents/42/recommend \
-H "Authorization: Bearer $OPENCLAW_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"task_tags": ["image-gen","stable-diffusion"],
"agent_id": "agent-007"
}'
5.3 Full Node‑RED flow (JSON export)
The JSON below combines the fetch flow from section 3 with a second sub‑flow that performs the scoring when an OpenClaw runtime sends a recommendation request over HTTP.
{
"id":"recommendation-service",
"type":"tab",
"label":"OpenClaw Recommendation Service",
"nodes":[
// ... (fetch flow from earlier) ...
{
"id":"http-in",
"type":"http in",
"z":"recommendation-service",
"name":"POST /recommend",
"url":"/recommend",
"method":"post",
"upload":false,
"swaggerDoc":"",
"x":140,
"y":260,
"wires":[["parse-body"]]
},
{
"id":"parse-body",
"type":"json",
"z":"recommendation-service",
"name":"Parse JSON",
"property":"payload",
"action":"",
"pretty":false,
"x":340,
"y":260,
"wires":[["score-function"]]
},
{
"id":"score-function",
"type":"function",
"z":"recommendation-service",
"name":"Score & Select",
"func":"// Re-use the scoring logic from recommendation.js.\n// Assumes the module and a connected Redis client were registered on the\n// global context via functionGlobalContext in settings.js.\nconst taskTags = msg.payload.task_tags;\nconst agentId = msg.payload.agent_id;\nconst redis = global.get('redis');\nconst { recommend } = global.get('recommendation');\nreturn (async () => {\n    msg.payload = await recommend(taskTags, agentId, redis);\n    return msg;\n})();",
"outputs":1,
"noerr":0,
"initialize":"",
"finalize":"",
"libs":[],
"x":560,
"y":260,
"wires":[["http-response"]]
},
{
"id":"http-response",
"type":"http response",
"z":"recommendation-service",
"name":"Return JSON",
"statusCode":"",
"headers":{},
"x":770,
"y":260,
"wires":[]
}
]
}
6. Deployment Tips
6.1 Containerizing the recommendation service
A minimal Dockerfile keeps the image under 80 MB:
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 8080
CMD ["node","dist/index.js"]
6.2 Scaling with Kubernetes
Deploy the container as a Deployment with a HorizontalPodAutoscaler that watches CPU usage. Example manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openclaw-recommender
spec:
  replicas: 2
  selector:
    matchLabels:
      app: recommender
  template:
    metadata:
      labels:
        app: recommender
    spec:
      containers:
        - name: recommender
          image: ghcr.io/yourorg/openclaw-recommender:latest
          ports:
            - containerPort: 8080
          env:
            - name: REDIS_HOST
              value: redis-service
            - name: OPENCLAW_TOKEN
              valueFrom:
                secretKeyRef:
                  name: openclaw-secret
                  key: token
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: recommender-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: openclaw-recommender
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
6.3 Monitoring and logging
- Expose Prometheus metrics (request latency, cache hit ratio).
- Ship logs to a centralized ELK stack; include agentId and pluginId for traceability.
- Set up alerts for rating‑cache expiration or unusually low recommendation scores.
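The first bullet can be prototyped without any dependency; in production you would more likely expose these numbers through a library such as prom-client, and the metric names here are illustrative:

```javascript
// Minimal in-process metrics: request latency and cache hit ratio.
class RecommenderMetrics {
  constructor() {
    this.requests = 0;
    this.cacheHits = 0;
    this.totalLatencyMs = 0;
  }
  // Call once per /recommend request.
  record(latencyMs, cacheHit) {
    this.requests += 1;
    this.totalLatencyMs += latencyMs;
    if (cacheHit) this.cacheHits += 1;
  }
  // Values to publish on a /metrics endpoint.
  snapshot() {
    return {
      avgLatencyMs: this.requests ? this.totalLatencyMs / this.requests : 0,
      cacheHitRatio: this.requests ? this.cacheHits / this.requests : 0,
    };
  }
}
```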
7. Conclusion
By leveraging the Rating API, normalizing its data, and applying a lightweight, weighted scoring algorithm inside Node‑RED, you can deliver personalized, real‑time plugin recommendations to every OpenClaw agent. The approach scales from a single‑node prototype to a Kubernetes‑native micro‑service, and it integrates seamlessly with existing OpenClaw workflows.
Next steps include:
- Experiment with additional signals (e.g., A/B test conversion rates).
- Fine‑tune the α, β, γ coefficients using Bayesian optimization.
- Publish the recommendation service as a reusable OpenClaw plugin via the UBOS marketplace.
When the recommendation engine runs in production, you’ll notice higher plugin adoption, fewer failed tasks, and a measurable boost in overall agent efficiency—exactly the outcomes modern AI‑first enterprises demand.