- Updated: March 22, 2026
- 8 min read
Hello World Part 2: Adding a Simple Tool to Your First OpenClaw Agent
In this tutorial you’ll learn how to extend your first OpenClaw agent by adding a simple weather‑API tool and see exactly how Memory, the Gateway, and the Agent Framework collaborate to make the new capability seamless.
1. Introduction
Welcome back to the OpenClaw series on the UBOS blog. If you’re a developer eager to build intelligent agents that can fetch real‑time data, you’re in the right place. This guide walks you through adding a weather‑API tool to the agent you created in Part 1, and it explains the inner workings of the Memory, Gateway, and Agent Framework layers.
By the end of this article you will have a fully functional OpenClaw agent that can answer questions like “What’s the weather in Paris tomorrow?” while persisting context across conversations.
2. Recap of Part 1
In Part 1 we set up a minimal OpenClaw agent that could respond to simple text prompts using the built‑in LLM. The key components we assembled were:
- A basic agent definition written in TypeScript.
- A memory store that kept the last three user messages.
- A gateway that routed user input to the LLM and returned the response.
If you missed that post, you can still follow along by checking the Enterprise AI platform by UBOS overview, which outlines the same architecture.
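To make the recap concrete, here is a minimal sketch of the rolling memory buffer from Part 1: a store that keeps only the last three user messages. The `RollingMemory` and `Turn` names are illustrative, not part of the actual OpenClaw API.

```typescript
// Sketch (not the real OpenClaw API): a rolling buffer that keeps
// only the last three turns, as in Part 1.
interface Turn {
  role: "user" | "assistant";
  content: string;
}

class RollingMemory {
  private turns: Turn[] = [];
  constructor(private maxTurns = 3) {}

  add(turn: Turn): void {
    this.turns.push(turn);
    // Drop the oldest turns once the buffer exceeds its limit.
    if (this.turns.length > this.maxTurns) {
      this.turns = this.turns.slice(-this.maxTurns);
    }
  }

  history(): Turn[] {
    return [...this.turns];
  }
}
```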
3. Adding a Simple Tool – Weather API
3.1. Setting up the tool
First, choose a free weather service. For this tutorial we’ll use OpenWeatherMap’s API. Sign up, obtain an API key, and note the endpoint:
```
https://api.openweathermap.org/data/2.5/weather?q={city}&appid={YOUR_KEY}&units=metric
```

Next, create a TypeScript module weatherTool.ts inside your tools folder:
```typescript
import axios from 'axios';

export interface WeatherResult {
  city: string;
  temperature: number;
  description: string;
}

export async function getWeather(city: string, apiKey: string): Promise<WeatherResult> {
  const url = `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(city)}&appid=${apiKey}&units=metric`;
  const response = await axios.get(url);
  const data = response.data;
  return {
    city: data.name,
    temperature: data.main.temp,
    description: data.weather[0].description,
  };
}
```

This tiny wrapper isolates the HTTP call, making it easy to test and reuse.
3.2. Integrating with the agent
OpenClaw expects tools to follow a simple contract: a name, a description, and an execute method that returns JSON‑serialisable data. Add the weather tool to the agent’s registry:
```typescript
import { Agent } from '@openclaw/agent';
import { getWeather } from './tools/weatherTool';

const weatherTool = {
  name: 'weather',
  description: 'Fetches current weather for a given city.',
  parameters: {
    type: 'object',
    properties: {
      city: { type: 'string', description: 'Name of the city' },
    },
    required: ['city'],
  },
  async execute(args: { city: string }) {
    const apiKey = process.env.OPENWEATHER_API_KEY!;
    return await getWeather(args.city, apiKey);
  },
};

const myAgent = new Agent({
  tools: [weatherTool],
  // other config stays the same as Part 1
});
```

Notice how the tool's parameters schema mirrors the OpenAI function‑calling format. OpenClaw will automatically translate a user request like "What's the temperature in Berlin?" into a function call, invoke weatherTool.execute, and inject the result back into the LLM prompt.
4. How Memory works with the new tool
The Memory layer stores conversational context, which now includes tool results. OpenClaw’s default InMemoryStore serialises each turn as a JSON object:
```json
{
  "role": "assistant",
  "content": "The weather in Berlin is 12°C with light rain.",
  "tool_calls": [
    {
      "name": "weather",
      "arguments": { "city": "Berlin" },
      "result": { "city": "Berlin", "temperature": 12, "description": "light rain" }
    }
  ]
}
```

When the next user message arrives, the memory buffer re‑feeds the entire history, allowing the LLM to reason about previous tool calls. This is crucial for multi‑step tasks such as "Give me a 3‑day forecast for Paris and suggest what to wear each day." The agent will call the weather tool three times, store each result, and then let the LLM generate a cohesive answer.
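The re‑feeding step can be sketched as a function that flattens the stored history, including tool results, into the next prompt. The message shape mirrors the JSON above; the `buildPrompt` helper and type names are illustrative, not part of the real OpenClaw API.

```typescript
// Sketch: flatten stored history (including tool results) into the
// prompt for the next LLM turn. Type names are illustrative only.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
  result: unknown;
}

interface StoredTurn {
  role: "user" | "assistant";
  content: string;
  tool_calls?: ToolCall[];
}

function buildPrompt(history: StoredTurn[], nextUserMessage: string): string {
  const lines = history.map((turn) => {
    // Inline each tool call so the LLM can reason over previous results.
    const toolInfo = (turn.tool_calls ?? [])
      .map((c) => `[tool ${c.name}(${JSON.stringify(c.arguments)}) -> ${JSON.stringify(c.result)}]`)
      .join(" ");
    return `${turn.role}: ${turn.content} ${toolInfo}`.trim();
  });
  lines.push(`user: ${nextUserMessage}`);
  return lines.join("\n");
}
```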
If you need longer context, consider swapping the in‑memory store for a persistent vector DB. The Chroma DB integration works out‑of‑the‑box with OpenClaw and keeps embeddings for fast similarity search.
5. Role of the Gateway in tool communication
The Gateway is the traffic controller between the agent, the LLM, and external services. When a tool call is detected, the gateway performs three steps:
- Parse the LLM's function‑call JSON.
- Dispatch the request to the matching tool's `execute` method.
- Inject the tool's result back into the prompt for the next LLM turn.
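These three steps can be sketched as a single dispatch function over a tool registry. The `Tool` interface and `handleToolCall` helper are hypothetical stand‑ins; the real OpenClaw gateway API may look different.

```typescript
// Sketch of the three gateway steps, using a hypothetical tool registry.
interface Tool {
  name: string;
  execute(args: Record<string, unknown>): Promise<unknown>;
}

async function handleToolCall(
  llmOutput: string, // raw function-call JSON emitted by the LLM
  tools: Map<string, Tool>
): Promise<string> {
  // 1. Parse the LLM's function-call JSON.
  const call = JSON.parse(llmOutput) as {
    name: string;
    arguments: Record<string, unknown>;
  };

  // 2. Dispatch the request to the matching tool's execute method.
  const tool = tools.get(call.name);
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  const result = await tool.execute(call.arguments);

  // 3. Inject the result back into the prompt for the next LLM turn.
  return `Tool ${call.name} returned: ${JSON.stringify(result)}`;
}
```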
Because the gateway centralises this logic, you can add authentication, rate‑limiting, or caching without touching the agent or the tool code. For example, to cache weather responses for 10 minutes you could wrap the execute method with a simple memoiser inside the gateway configuration.
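The 10‑minute caching idea above can be sketched as a generic TTL memoiser that wraps any async function, including a tool's `execute`. The `withTtlCache` name is an assumption for illustration, not an OpenClaw built‑in.

```typescript
// Sketch: wrap an async function (e.g. a tool's execute) with a TTL
// cache. `withTtlCache` is an illustrative helper, not an OpenClaw API.
function withTtlCache<A, R>(
  fn: (arg: A) => Promise<R>,
  ttlMs: number,
  keyOf: (arg: A) => string = (a) => JSON.stringify(a)
): (arg: A) => Promise<R> {
  const cache = new Map<string, { value: R; expires: number }>();
  return async (arg: A) => {
    const key = keyOf(arg);
    const hit = cache.get(key);
    // Serve from cache while the entry is still fresh.
    if (hit && hit.expires > Date.now()) return hit.value;
    const value = await fn(arg);
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Usage sketch: cache weather lookups for 10 minutes.
// const cachedGetWeather = withTtlCache(
//   (city: string) => getWeather(city, process.env.OPENWEATHER_API_KEY!),
//   10 * 60 * 1000
// );
```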
OpenClaw also supports multiple gateways for hybrid deployments. If you want to offload heavy computation to a separate microservice, define a RemoteGateway that forwards the tool call over HTTP. The same ChatGPT and Telegram integration uses a remote gateway to deliver responses instantly to chat users.
6. Interaction within the Agent Framework
The Agent Framework orchestrates the three pillars—Memory, Gateway, and Tools—into a single, reusable object. Its lifecycle looks like this:
| Phase | What Happens |
|---|---|
| 1️⃣ Receive Input | User message is handed to the gateway. |
| 2️⃣ LLM Generation | LLM produces either a plain response or a function‑call JSON. |
| 3️⃣ Tool Execution | If a function call exists, the gateway routes it to the appropriate tool. |
| 4️⃣ Memory Update | Both the raw LLM output and the tool result are appended to the memory store. |
| 5️⃣ Final Reply | The agent returns the enriched response to the user. |
This deterministic flow guarantees that every piece of data—user query, tool output, and LLM answer—remains traceable, which is essential for debugging and compliance.
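The five phases can be sketched as one deterministic turn loop that records a trace entry at each step. Everything here (`runTurn`, the `LlmFn` shape, the trace) is a simplified stand‑in for the real framework, shown only to make the traceability claim concrete.

```typescript
// Sketch of the five lifecycle phases as one traceable pipeline.
// All names here are illustrative, not the real OpenClaw API.
type LlmFn = (prompt: string) => {
  content: string;
  toolCall?: { name: string; args: unknown };
};

interface TraceEntry { phase: string; data: unknown; }

function runTurn(
  input: string,
  llm: LlmFn,
  tools: Record<string, (args: unknown) => unknown>,
  memory: string[]
): { reply: string; trace: TraceEntry[] } {
  const trace: TraceEntry[] = [];
  trace.push({ phase: "receive", data: input });                // 1. Receive input
  let out = llm(memory.concat(input).join("\n"));               // 2. LLM generation
  if (out.toolCall) {
    const result = tools[out.toolCall.name](out.toolCall.args); // 3. Tool execution
    trace.push({ phase: "tool", data: result });
    out = llm(`tool result: ${JSON.stringify(result)}`);        // second LLM pass
  }
  memory.push(input, out.content);                              // 4. Memory update
  trace.push({ phase: "reply", data: out.content });            // 5. Final reply
  return { reply: out.content, trace };
}
```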
For teams that need visual workflow design, the Workflow automation studio lets you drag‑and‑drop these phases, generating the underlying OpenClaw configuration automatically.
7. Full tutorial steps with code snippets
Below is a concise, end‑to‑end checklist you can copy‑paste into your project.
Step 1 – Install dependencies
```shell
npm install @openclaw/agent axios dotenv
```

Step 2 – Create .env file

```
OPENAI_API_KEY=sk-...
OPENWEATHER_API_KEY=your_openweather_key
```

Step 3 – Build the weather tool (weatherTool.ts)
```typescript
import axios from 'axios';

export async function getWeather(city: string, apiKey: string) {
  const url = `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(city)}&appid=${apiKey}&units=metric`;
  const { data } = await axios.get(url);
  return {
    city: data.name,
    temperature: data.main.temp,
    description: data.weather[0].description,
  };
}
```

Step 4 – Register the tool in your agent (agent.ts)
```typescript
import { Agent } from '@openclaw/agent';
import { getWeather } from './weatherTool';
import dotenv from 'dotenv';

dotenv.config();

const weatherTool = {
  name: 'weather',
  description: 'Provides current weather for a city.',
  parameters: {
    type: 'object',
    properties: {
      city: { type: 'string', description: 'City name' },
    },
    required: ['city'],
  },
  async execute({ city }: { city: string }) {
    return await getWeather(city, process.env.OPENWEATHER_API_KEY!);
  },
};

export const myAgent = new Agent({
  tools: [weatherTool],
  // reuse memory & gateway from Part 1
});
```

Step 5 – Hook the agent to a simple HTTP endpoint
```typescript
import express from 'express';
import { myAgent } from './agent';

const app = express();
app.use(express.json());

app.post('/chat', async (req, res) => {
  const { message } = req.body;
  const reply = await myAgent.handleMessage(message);
  res.json({ reply });
});

app.listen(3000, () => console.log('Server running on http://localhost:3000'));
```

Run node dist/server.js (or npm run dev if you use a bundler) and test with curl:
```shell
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message":"What is the weather in Tokyo?"}'
```

You should receive a JSON payload containing a friendly sentence with the temperature and description, e.g.:
```json
{ "reply": "The weather in Tokyo is 22°C with scattered clouds." }
```

That's it: your OpenClaw agent now talks to a live API, remembers the interaction, and can be expanded with more tools later.
8. Conclusion and next steps
We’ve demonstrated how a single, well‑structured tool can be woven into the OpenClaw ecosystem. The key takeaways are:
- Tool contracts keep the agent‑tool boundary clean.
- Memory automatically records tool results, enabling multi‑turn reasoning.
- Gateway centralises communication, making it easy to add cross‑cutting concerns.
- Agent Framework orchestrates everything in a deterministic pipeline.
Ready for the next challenge? Consider adding an ElevenLabs AI voice integration so your agent can speak the weather forecast aloud, or explore the UBOS templates for quick start, which already bundle weather‑API calls with UI components.
Stay tuned for Part 3, where we’ll dive into dynamic tool discovery and show how to let the agent decide which external service to call based on user intent.