- Updated: March 18, 2026
- 7 min read
Deploying the OpenClaw Rating Service on Kubernetes with Helm
Deploying the OpenClaw rating service on Kubernetes with Helm involves preparing a Kubernetes cluster, configuring a Helm chart, hardening security, enabling high‑availability, and verifying the deployment—all in a reproducible, production‑ready workflow.
1. Introduction
OpenClaw is a lightweight, open‑source rating engine that powers recommendation systems, leaderboards, and trust scores. When you run OpenClaw inside a Kubernetes cluster, you gain elasticity, self‑healing, and a unified deployment pipeline. Helm, the de‑facto package manager for Kubernetes, lets you describe the entire OpenClaw stack—containers, ConfigMaps, Secrets, Services, and Ingress—in a single, version‑controlled chart.
This guide walks developers and DevOps engineers through every step required to launch OpenClaw on Kubernetes using Helm, from prerequisites to post‑install validation. By the end, you’ll have a secure, highly available OpenClaw service ready for production traffic.
2. Prerequisites
Before you start, ensure the following components are in place:
- Kubernetes cluster – A fully functional cluster (v1.22+ recommended) with at least three worker nodes for HA.
- Helm CLI – Helm 3.x installed on your workstation (helm version should return a client version).
- OpenClaw source code – Access to the OpenClaw repository or a pre‑built Docker image.
- kubectl – Configured to communicate with your cluster (run kubectl get nodes to verify).
- Domain name (optional) – For Ingress exposure, a DNS record pointing to your load balancer.
3. Helm Chart Configuration
3.1 Chart directory structure
A conventional Helm chart follows this layout:
```
openclaw/
├── Chart.yaml
├── values.yaml
├── templates/
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── configmap.yaml
│   ├── secret.yaml
│   └── _helpers.tpl
└── charts/        # optional sub-charts
```
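For completeness, a minimal Chart.yaml matching this layout might look like the following (the version and appVersion values are illustrative):

```yaml
# Chart.yaml (illustrative): metadata for the chart layout above
apiVersion: v2
name: openclaw
description: Helm chart for the OpenClaw rating service
type: application
version: 0.1.0        # chart version; bump on chart changes
appVersion: "1.2.3"   # OpenClaw application version (example)
```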
3.2 values.yaml configuration details
The values.yaml file is the single source of truth for environment‑specific settings. Below is a minimal example tailored for OpenClaw:
```yaml
# values.yaml
replicaCount: 3

image:
  repository: openclaw/openclaw
  tag: "latest"        # pin a specific release (e.g., "v1.2.3") in production
  pullPolicy: IfNotPresent

service:
  type: LoadBalancer
  port: 80

ingress:
  enabled: true
  className: nginx
  host: openclaw.example.com
  tls: true
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"

resources:
  limits:
    cpu: "500m"
    memory: "512Mi"
  requests:
    cpu: "250m"
    memory: "256Mi"

env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: openclaw-secret
        key: database-url

# Security context (see section 4)
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
```
3.3 Customizing resources and environment variables
Adjust resources to match your workload. For CPU‑intensive rating calculations, increase limits.cpu and requests.cpu. Use the env block to inject runtime configuration such as database connections, external API keys, or feature flags. All secrets must be stored in Kubernetes Secret objects (see section 4).
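As a sketch of how the chart can consume these values, the container section of templates/deployment.yaml might pipe the resources and env blocks through toYaml (the nindent widths here are illustrative and depend on the surrounding template):

```yaml
# templates/deployment.yaml (container excerpt, illustrative):
# renders the resources and env values defined in values.yaml
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    resources:
      {{- toYaml .Values.resources | nindent 6 }}
    env:
      {{- toYaml .Values.env | nindent 6 }}
```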
4. Security Hardening
Running a public rating service demands a defense‑in‑depth approach. The following hardening steps are essential for compliance and resilience.
4.1 RBAC roles and bindings
Create a dedicated ServiceAccount for OpenClaw and bind it to the least‑privilege Role:
```yaml
# templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "openclaw.fullname" . }}
  labels:
    {{- include "openclaw.labels" . | nindent 4 }}
```

```yaml
# templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "openclaw.fullname" . }}-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
```

```yaml
# templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "openclaw.fullname" . }}-binding
subjects:
  - kind: ServiceAccount
    name: {{ include "openclaw.fullname" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: Role
  name: {{ include "openclaw.fullname" . }}-role
  apiGroup: rbac.authorization.k8s.io
```
4.2 NetworkPolicies for pod isolation
Restrict traffic to only what OpenClaw needs (database, ingress, and health checks):
```yaml
# templates/networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ include "openclaw.fullname" . }}-np
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: {{ include "openclaw.name" . }}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allows traffic from OpenClaw's own pods; extend with a
    # namespaceSelector for your ingress controller's namespace
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: {{ include "openclaw.name" . }}
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8   # Example: allow DB subnet
      ports:
        - protocol: TCP
          port: 5432           # PostgreSQL
```
4.3 Secrets management for credentials
Never hard‑code passwords. Store them in a Secret and reference via valueFrom.secretKeyRef (see values.yaml example). For production, consider external secret stores such as HashiCorp Vault or AWS Secrets Manager and use the external-secrets operator.
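For a quick start without an external store, a plain Secret like the following works; its name and key match the secretKeyRef in values.yaml, and the connection string is a placeholder (never commit real credentials to version control):

```yaml
# Example Secret: placeholder credentials only
apiVersion: v1
kind: Secret
metadata:
  name: openclaw-secret
type: Opaque
stringData:
  database-url: "postgres://openclaw:CHANGE_ME@db-host:5432/openclaw"
```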
4.4 PodSecurityContext & SecurityContext
Enforce non‑root execution and read‑only root filesystem:
```yaml
# templates/deployment.yaml (pod template excerpt)
spec:
  securityContext:
    runAsUser: {{ .Values.securityContext.runAsUser }}
    runAsGroup: {{ .Values.securityContext.runAsGroup }}
    fsGroup: {{ .Values.securityContext.fsGroup }}
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      securityContext:
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
```
5. High‑Availability Setup
OpenClaw must stay online even when individual pods or nodes fail. The following patterns guarantee resilience.
5.1 Deploying multiple replicas
Set replicaCount: 3 (or higher) in values.yaml. The default scheduler prefers to spread replicas across nodes, but it does not guarantee separation; for a hard guarantee, add pod anti‑affinity or topology spread constraints.
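To make node‑level spreading explicit rather than best‑effort, a topology spread constraint can be added to the pod template. This is a sketch that assumes the chart's standard label helper:

```yaml
# Pod template excerpt (illustrative): cap the replica imbalance
# between nodes at one pod
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway   # use DoNotSchedule for a hard rule
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: {{ include "openclaw.name" . }}
```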
5.2 Service type LoadBalancer / Ingress configuration
For cloud providers, a LoadBalancer service exposes a stable IP. For on‑prem or multi‑cloud, use an Ingress controller (NGINX, Traefik) with TLS termination. Example Ingress snippet:
```yaml
# templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "openclaw.fullname" . }}-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: {{ .Values.ingress.className }}
  tls:
    - hosts:
        - {{ .Values.ingress.host }}
      secretName: openclaw-tls
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ include "openclaw.fullname" . }}
                port:
                  number: {{ .Values.service.port }}
```
5.3 Persistent storage considerations
If OpenClaw stores state locally (e.g., SQLite), attach a PersistentVolumeClaim. For stateless deployments backed by an external database, ensure the DB is HA (e.g., PostgreSQL with Patroni).
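If you do need local state, a claim along these lines can be mounted into the pod (storage class and size are placeholders; adjust to your cluster):

```yaml
# templates/pvc.yaml (illustrative): only for stateful setups such as SQLite
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "openclaw.fullname" . }}-data
spec:
  accessModes:
    - ReadWriteOnce            # single-node read-write; fits one writer pod
  storageClassName: standard   # placeholder: use your cluster's class
  resources:
    requests:
      storage: 5Gi             # placeholder size
```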
5.4 Horizontal Pod Autoscaling (HPA)
Enable HPA to scale pods based on CPU or custom metrics (e.g., request latency). Note that resource‑based scaling requires the metrics‑server add‑on in the cluster. Example HPA manifest:
```yaml
# templates/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "openclaw.fullname" . }}-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "openclaw.fullname" . }}
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
6. Deployment Steps
Follow these commands in order. All commands assume you are in the root of the cloned openclaw Helm chart.
6.1 Adding the Helm repository
```shell
# If you host the chart in a private repo, add it:
helm repo add ubos https://charts.ubos.tech
helm repo update
```
6.2 Installing or upgrading the release
```shell
# First install
helm install openclaw ubos/openclaw \
  --namespace rating \
  --create-namespace \
  -f values.yaml \
  --set image.tag=v1.2.3 \
  --set replicaCount=3

# To upgrade later
helm upgrade openclaw ubos/openclaw \
  --namespace rating \
  -f values.yaml \
  --set image.tag=v1.2.4
```
6.3 Post‑install verification commands
```shell
# Verify pods are running
kubectl get pods -n rating -l app.kubernetes.io/name=openclaw

# Check service and ingress
kubectl get svc -n rating openclaw
kubectl get ingress -n rating openclaw-ingress

# Tail logs for the first pod
kubectl logs -n rating -f \
  $(kubectl get pods -n rating -l app.kubernetes.io/name=openclaw -o jsonpath="{.items[0].metadata.name}")
```
7. Verification and Testing
After deployment, perform functional and performance checks to ensure the service behaves as expected.
7.1 Basic health endpoint
OpenClaw exposes /healthz. Use curl against the LoadBalancer IP or Ingress host:
```shell
curl -k https://openclaw.example.com/healthz
# Expected output: {"status":"ok"}
```
7.2 Rating API sanity test
Submit a test rating payload:
```shell
curl -X POST -H "Content-Type: application/json" \
  -d '{"user_id":"u123","item_id":"i456","score":4.5}' \
  https://openclaw.example.com/api/v1/rate
```
7.3 Load testing basics
Use hey or k6 to simulate traffic. Example with hey (1000 requests, 50 concurrent):
```shell
# The rating endpoint expects POST, so send the same JSON payload
hey -n 1000 -c 50 -m POST -H "Content-Type: application/json" \
  -d '{"user_id":"u123","item_id":"i456","score":4.5}' \
  https://openclaw.example.com/api/v1/rate
```
Monitor CPU/memory via kubectl top pods -n rating and ensure HPA reacts as configured.
8. Conclusion and Next Steps
Deploying OpenClaw with Helm on Kubernetes gives you a repeatable, secure, and scalable rating service. The chart abstracts away boilerplate YAML while still allowing fine‑grained customization for resources, security policies, and HA features.
Next, consider integrating OpenClaw with your existing data pipelines, adding observability (Prometheus metrics, Grafana dashboards), and automating CI/CD with GitOps tools like Argo CD or Flux.
For a deeper dive into hosting OpenClaw on UBOS, see the dedicated guide: OpenClaw hosting guide on UBOS.