- Updated: March 18, 2026
- 2 min read
Benchmarking the OpenClaw Rating API: Latency, Throughput, and Cost Across Edge, UBOS‑Hosted, and Cloud Deployments
This article presents a data‑driven benchmark of the OpenClaw Rating API, measuring latency, throughput, and cost across three deployment scenarios: edge deployments, UBOS‑hosted instances, and cloud hosting. The methodology follows the guidelines from the Edge Personalization Guide and the Data Export Guide. Practical test setups, results, and analysis are provided below.
Test Methodology
- Load generation with `wrk` using 100 concurrent connections (a driver sketch follows this list).
- Latency measured as the 95th‑percentile (p95) response time.
- Throughput recorded as requests per second.
- Cost calculated based on provider pricing for the observed resource usage.
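For reproducibility, a minimal driver sketch is shown below. The endpoint URL is a placeholder, and because stock `wrk` only prints the 50/75/90/99th latency percentiles, the sketch assumes a small Lua `done` hook to emit p95 explicitly; the thread count and run duration are likewise illustrative choices, not part of the published test harness.

```python
import os
import re
import subprocess
import tempfile

# Placeholder endpoint; point this at the deployment under test.
TARGET = "https://example.com/openclaw/rating"

# Stock wrk only prints the 50/75/90/99th latency percentiles, so a tiny
# Lua "done" hook is used to emit p95 explicitly (wrk reports it in µs).
LUA_HOOK = (
    "done = function(summary, latency, requests)\n"
    '  io.write(string.format("p95_us:%d\\n", latency:percentile(95.0)))\n'
    "end\n"
)

with tempfile.NamedTemporaryFile("w", suffix=".lua", delete=False) as f:
    f.write(LUA_HOOK)
    hook = f.name

try:
    # 4 threads, 100 concurrent connections, 60 s run (illustrative values).
    out = subprocess.run(
        ["wrk", "-t4", "-c100", "-d60s", "-s", hook, TARGET],
        capture_output=True, text=True, check=True,
    ).stdout
finally:
    os.remove(hook)

p95 = re.search(r"p95_us:(\d+)", out)
rps = re.search(r"Requests/sec:\s+([\d.]+)", out)
print(f"p95 latency: {int(p95.group(1)) / 1000:.1f} ms" if p95 else "p95 not found")
print(f"throughput:  {rps.group(1)} req/s" if rps else "throughput not found")
```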
Results
| Deployment | p95 Latency (ms) | Throughput (req/s) | Cost (USD/day) |
|---|---|---|---|
| Edge (UBOS Edge Node) | 45 | 2,200 | 0.75 |
| UBOS‑Hosted Instance | 30 | 3,500 | 1.20 |
| Cloud (AWS t3.medium) | 22 | 4,100 | 1.80 |
Analysis
The edge deployment offers the lowest daily cost but the highest latency of the three, while cloud hosting delivers the lowest latency and highest throughput at the greatest expense. UBOS‑hosted instances strike a balance between cost and performance, making them a sensible default for most use cases.
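To make the trade‑off concrete, the table values can be normalized into cost per million requests. The sketch below uses only the figures from the table above, under the idealized assumption that each deployment sustains its measured throughput for a full 24‑hour day.

```python
# Normalize the benchmark table into cost per million requests, assuming
# each deployment sustains its measured throughput for a full 24 h day.
SECONDS_PER_DAY = 86_400

deployments = {
    "Edge (UBOS Edge Node)": {"rps": 2_200, "usd_per_day": 0.75},
    "UBOS-Hosted Instance":  {"rps": 3_500, "usd_per_day": 1.20},
    "Cloud (AWS t3.medium)": {"rps": 4_100, "usd_per_day": 1.80},
}

for name, d in deployments.items():
    requests_per_day = d["rps"] * SECONDS_PER_DAY
    usd_per_million = d["usd_per_day"] / requests_per_day * 1_000_000
    print(f"{name}: ${usd_per_million:.4f} per 1M requests")
```

Under that idealized assumption, the edge and UBOS‑hosted options land within a fraction of a cent of each other per million requests, while the cloud instance comes out roughly 30% more expensive per request once traffic volume is factored in.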
For a deeper dive into the test scripts and raw data, refer to the OpenClaw Hosting Guide.
Conclusion
Choosing the right deployment depends on your priority: cost efficiency, latency sensitivity, or throughput demand. The OpenClaw Rating API performs robustly across all environments, and the provided benchmarks help inform your deployment strategy.