Streaming OpenClaw Rating API Events to Google BigQuery: A Step‑by‑Step Guide
Answer: To stream OpenClaw Rating API events into Google BigQuery in real time, enable the OpenClaw Rating API, create a Pub/Sub topic, grant the necessary IAM roles, design a flat‑table schema in BigQuery, and connect the two using a Dataflow pipeline or a Cloud Functions subscriber that writes each event directly to the target table.
Introduction
Developers and data engineers often ask, “How can I capture every rating event from OpenClaw and analyse it instantly on Google Cloud?” The answer lies in a serverless, event‑driven pipeline that moves data from OpenClaw’s Rating API to BigQuery without batch delays. This guide walks you through every configuration step, from API activation to a ready‑to‑run BigQuery query, so you can start building large‑scale, real‑time analytics dashboards today.
Prerequisites
- A Google Cloud project with billing enabled.
- An active OpenClaw account with access to the Rating API.
- Basic familiarity with Pub/Sub, Dataflow, and Cloud Functions.
- Permission to create service accounts and assign IAM roles in your GCP project.
Setting up OpenClaw Rating API
Enable the API
Log in to the OpenClaw dashboard, navigate to Integrations → Rating API, and toggle the Enable switch. This action registers your account to emit rating events.
Generate credentials
OpenClaw requires a JSON service‑account key to authenticate when publishing to Pub/Sub. Follow these steps:
- In the Google Cloud Console, go to IAM & Admin → Service Accounts.
- Create a new service account named openclaw-publisher.
- Grant the role Pub/Sub Publisher (you'll add more roles later).
- Click Keys → Add Key → Create New Key (JSON) and download the file.
- Upload the JSON file to the OpenClaw Credentials section.
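If you prefer the command line, the same setup can be scripted with gcloud. This is a minimal sketch that creates both service accounts used in this guide (the names match the IAM table in the next section) and generates the key file for OpenClaw; the key file name is arbitrary:
gcloud iam service-accounts create openclaw-publisher \
--display-name="OpenClaw rating event publisher"
gcloud iam service-accounts create dataflow-worker \
--display-name="Worker account for the rating pipeline"
gcloud iam service-accounts keys create openclaw-publisher-key.json \
--iam-account="openclaw-publisher@$PROJECT_ID.iam.gserviceaccount.com"
Upload openclaw-publisher-key.json to the OpenClaw Credentials section exactly as you would a console-generated key.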
Configuring IAM permissions
To keep the pipeline secure and maintain the principle of least privilege, assign the following roles:
| Component | Required Role | Typical Service Account |
|---|---|---|
| Pub/Sub Topic (publisher) | roles/pubsub.publisher | openclaw-publisher |
| Pub/Sub Subscription (subscriber) | roles/pubsub.subscriber | dataflow-worker |
| BigQuery Dataset (writer) | roles/bigquery.dataEditor | dataflow-worker |
After creating the service accounts, bind the roles using gcloud:
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:openclaw-publisher@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/pubsub.publisher"
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:dataflow-worker@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/pubsub.subscriber"
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:dataflow-worker@$PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/bigquery.dataEditor"Designing the BigQuery schema
A well‑thought‑out schema makes downstream analytics painless. Because rating events are flat and time‑series oriented, a single table often suffices.
Recommended table: openclaw_ratings
- event_id (STRING) – Unique identifier generated by OpenClaw.
- user_id (STRING) – Anonymous or hashed user reference.
- item_id (STRING) – The product or content being rated.
- rating_value (INTEGER) – Numeric rating (e.g., 1‑5).
- rating_timestamp (TIMESTAMP) – Time of the rating event; OpenClaw emits it as Unix epoch seconds, which the pipelines below convert.
- metadata (JSON) – Optional free‑form data for future extensions.
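For concreteness, a single event might look like the following hypothetical payload; the field names follow the schema above, with rating_timestamp as Unix epoch seconds to match the conversion done in the pipelines below:
{
  "event_id": "evt_8f3a2c91",
  "user_id": "u_4b21e7",
  "item_id": "item_1092",
  "rating_value": 4,
  "rating_timestamp": 1742282400,
  "metadata": {"source": "mobile_app", "locale": "en-US"}
}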
Create the table with partitioning on rating_timestamp to keep query costs low:
bq mk --time_partitioning_type=DAY \
--schema=event_id:STRING,user_id:STRING,item_id:STRING,rating_value:INTEGER,rating_timestamp:TIMESTAMP,metadata:JSON \
$PROJECT_ID:analytics.openclaw_ratings
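Two notes on this command: the analytics dataset must already exist (create it with bq mk --dataset $PROJECT_ID:analytics), and if your dashboards slice heavily by item, passing --clustering_fields=item_id is an optional tweak that can further reduce the data scanned by per-item queries like the one at the end of this guide.
Streaming rating events to BigQuery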
Create a Pub/Sub topic and subscription
gcloud pubsub topics create openclaw-rating-events
gcloud pubsub subscriptions create openclaw-rating-sub \
--topic=openclaw-rating-events \
--ack-deadline=30
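Before attaching a subscriber, you can sanity-check the topic by publishing a hand-crafted event; the payload here mirrors the hypothetical example from the schema section:
gcloud pubsub topics publish openclaw-rating-events \
--message='{"event_id":"evt_test","user_id":"u_1","item_id":"item_1","rating_value":5,"rating_timestamp":1742282400,"metadata":{}}'
Option 1: Dataflow (Apache Beam) pipeline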
Dataflow offers auto‑scaling and exactly‑once processing. Below is a minimal Python Beam pipeline that reads from Pub/Sub, parses the JSON payload, and writes to BigQuery.
import json
import datetime

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

class ParseRatingFn(beam.DoFn):
    def process(self, element):
        record = json.loads(element)
        # OpenClaw sends epoch seconds; BigQuery expects an RFC 3339 timestamp
        record['rating_timestamp'] = datetime.datetime.fromtimestamp(
            record['rating_timestamp'], tz=datetime.timezone.utc).isoformat()
        # JSON columns are streamed as serialized strings
        record['metadata'] = json.dumps(record.get('metadata', {}))
        yield record

options = PipelineOptions(
    project='YOUR_PROJECT_ID',
    runner='DataflowRunner',
    temp_location='gs://YOUR_BUCKET/tmp',
    region='us-central1',
    job_name='openclaw-to-bq',
    streaming=True  # Pub/Sub is unbounded, so the job must run in streaming mode
)

with beam.Pipeline(options=options) as p:
    (p
     | 'ReadPubSub' >> beam.io.ReadFromPubSub(
         topic='projects/YOUR_PROJECT_ID/topics/openclaw-rating-events')
     | 'Decode' >> beam.Map(lambda x: x.decode('utf-8'))
     | 'ParseJSON' >> beam.ParDo(ParseRatingFn())
     | 'WriteBQ' >> beam.io.WriteToBigQuery(
         table='YOUR_PROJECT_ID:analytics.openclaw_ratings',
         schema='event_id:STRING,user_id:STRING,item_id:STRING,rating_value:INTEGER,rating_timestamp:TIMESTAMP,metadata:JSON',
         write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
         create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
    )
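Running this file with Python submits the job to Dataflow (you'll need the apache-beam[gcp] package installed locally); because the options enable streaming mode, the job keeps running and writes each rating event as it arrives.
Option 2: Cloud Functions (Node.js) subscriber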
If you prefer a lightweight approach, a Cloud Function triggered by the topic can insert each event directly.
// The BigQuery client can be created once per function instance and reused
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

exports.streamRating = async (event, context) => {
  // Pub/Sub delivers the message payload base64-encoded in event.data
  const data = Buffer.from(event.data, 'base64').toString();
  const record = JSON.parse(data);
  const rows = [{
    event_id: record.event_id,
    user_id: record.user_id,
    item_id: record.item_id,
    rating_value: record.rating_value,
    // Epoch seconds -> ISO-8601 string for the TIMESTAMP column
    rating_timestamp: new Date(record.rating_timestamp * 1000).toISOString(),
    // JSON columns are inserted as serialized strings via the streaming API
    metadata: JSON.stringify(record.metadata || {})
  }];
  await bigquery
    .dataset('analytics')
    .table('openclaw_ratings')
    .insert(rows);
};
Deploy with:
gcloud functions deploy streamRating \
--runtime nodejs18 \
--trigger-topic openclaw-rating-events \
--service-account dataflow-worker@$PROJECT_ID.iam.gserviceaccount.com \
--region us-central1
Depending on your gcloud version you may need to add --no-gen2, since the (event, context) signature above is the 1st-gen background-function style; 2nd-gen functions receive a CloudEvent instead.
Simple BigQuery query example
Once data lands in openclaw_ratings, you can run analytical queries instantly. The following example shows the average rating per item for the last 7 days:
SELECT
item_id,
AVG(rating_value) AS avg_rating,
COUNT(*) AS total_votes
FROM
`YOUR_PROJECT_ID.analytics.openclaw_ratings`
WHERE
rating_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY
item_id
ORDER BY
avg_rating DESC
LIMIT 20;
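Because the table is partitioned by day on rating_timestamp, the WHERE clause also prunes partitions, so BigQuery scans only about a week of data instead of the full table.
Publishing the article on UBOS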
When you’re ready to share this guide with the developer community, the UBOS platform makes publishing straightforward. Use the Web app editor on UBOS to create a new blog post, paste in the article, and preview it with Tailwind styling. If you need a quick start, explore the UBOS quick-start templates, which include pre‑configured SEO meta tags and schema.org markup.
For readers who want to host their own OpenClaw instance, UBOS offers one‑click deployment through its OpenClaw hosting page, eliminating server management so you can focus on data pipelines instead.
Additional UBOS resources that complement this tutorial:
- UBOS homepage – Overview of the platform’s capabilities.
- UBOS platform overview – Deep dive into the modular architecture.
- Enterprise AI platform by UBOS – Scaling AI workloads across the organization.
- UBOS partner program – Join a network of technology partners.
- AI marketing agents – Automate campaign creation with generative AI.
- Workflow automation studio – Build no‑code pipelines that complement the code‑first approach described here.
- UBOS pricing plans – Choose a plan that matches your usage.
- UBOS portfolio examples – See real‑world implementations of data pipelines.
Conclusion and next steps
By following the steps above, you now have a fully automated, real‑time pipeline that captures every OpenClaw rating event and stores it in a query‑ready BigQuery table. This foundation enables:
- Live dashboards that surface user sentiment as it happens.
- Machine‑learning models that train on fresh data every hour.
- Alerting mechanisms that trigger when anomalous rating spikes occur (see the sketch below).
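As a hypothetical sketch of that last idea, a scheduled query could flag items whose rating volume in the past hour far exceeds their weekly hourly average; the 3x threshold and both windows are assumptions to tune:
WITH hourly AS (
  SELECT item_id, COUNT(*) AS votes_last_hour
  FROM `YOUR_PROJECT_ID.analytics.openclaw_ratings`
  WHERE rating_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
  GROUP BY item_id
),
baseline AS (
  SELECT item_id, COUNT(*) / (7 * 24) AS avg_votes_per_hour
  FROM `YOUR_PROJECT_ID.analytics.openclaw_ratings`
  WHERE rating_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  GROUP BY item_id
)
SELECT h.item_id, h.votes_last_hour, b.avg_votes_per_hour
FROM hourly h
JOIN baseline b USING (item_id)
WHERE h.votes_last_hour > 3 * b.avg_votes_per_hour;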
Future enhancements could include:
- Enriching events with user‑profile data via the Chroma DB integration.
- Generating audio summaries of rating trends using the ElevenLabs AI voice integration.
- Connecting the pipeline to an AI Chatbot template for on‑demand analytics queries.
Start building, iterate quickly, and let the data drive your product decisions. Happy streaming!