- Updated: March 29, 2026
- 7 min read
Comprehensive Guide to Nanobot AI: Building Autonomous Agent Pipelines

Nanobot AI delivers a lightweight yet fully‑featured autonomous agent pipeline that can be assembled, extended, and scheduled in just a few lines of Python. It leverages GPT‑4o‑mini, persistent memory, custom skills, sub‑agents, and cron‑style scheduling.
1. Overview of the Nanobot AI Agent Pipeline
The nanobot framework packs seven core subsystems into roughly 4 000 lines of clean Python code. Each subsystem is deliberately isolated (MECE) so developers can replace or augment it without breaking the whole stack.
- User Interfaces: CLI, Telegram, WhatsApp, Discord – all funnel messages into a unified Message Bus.
- Message Bus: Publishes inbound/outbound messages, decoupling the agent core from transport layers.
- Agent Loop: Repeatedly builds context, calls the LLM, and executes tool calls until a final answer is produced.
- Memory Layer: Long‑term MEMORY.md and daily journal files keep knowledge across sessions.
- Skills Loader: Markdown‑based skill definitions that guide the LLM on when to invoke specialized behavior.
- Sub‑Agent Manager: Spawns background workers for parallel tasks.
- Cron Service: Schedules recurring jobs (e.g., memory cleanup) using an APScheduler‑like pattern.
Why it matters for developers
Because every component is a plain Python module, you can drop in a custom tool, swap the LLM provider, or connect the pipeline to the UBOS platform with a single import.
2. Installation & Secure Setup
Getting started takes three concise steps:
- Install the package and dependencies via PyPI:
pip install -q nanobot-ai openai rich httpx
- Provide your OpenAI API key securely. The tutorial uses getpass to keep the key in memory only:
import getpass, os
OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
- Bootstrap the workspace – a hidden .nanobot folder that stores AGENTS.md, SOUL.md, USER.md, and the memory/ directory.
import pathlib
HOME = pathlib.Path.home() / ".nanobot"
WORKSPACE = HOME / "workspace"
WORKSPACE.mkdir(parents=True, exist_ok=True)
# write minimal bootstrap files …
All of the above can be wrapped in a single setup.sh script, making the process reproducible for CI pipelines or new team members.
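As a sketch of what that bootstrap step looks like in reusable form (the placeholder file contents and the bootstrap_workspace name are illustrative, not nanobot's actual defaults):

```python
import pathlib

def bootstrap_workspace(home=None):
    """Create the hidden .nanobot workspace and its minimal bootstrap files."""
    home = pathlib.Path(home) if home else pathlib.Path.home()
    workspace = home / ".nanobot" / "workspace"
    workspace.mkdir(parents=True, exist_ok=True)
    (workspace / "memory").mkdir(exist_ok=True)  # daily journals live here
    for name, text in {"AGENTS.md": "# Agents\n",
                       "SOUL.md": "# Persona\n",
                       "USER.md": "# User profile\n"}.items():
        path = workspace / name
        if not path.exists():          # never clobber an existing workspace
            path.write_text(text)
    return workspace
```

Because the function is idempotent, rerunning it on an existing workspace is safe – which is exactly what a CI pipeline or a new team member's setup script needs.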
3. The Agent Loop – Core Reasoning Engine
The loop runs up to max_iterations (default 40) and follows a deterministic four‑step cycle:
| Step | What Happens |
|---|---|
| 1️⃣ Context Builder | Collects system prompt, memory snippets, and loaded skills into a single message. |
| 2️⃣ LLM Call | Invokes OpenAI ChatGPT integration (GPT‑4o‑mini by default) with tool schemas. |
| 3️⃣ Tool Execution | If the model returns tool_calls, the framework executes each function, appends the result, and loops. |
| 4️⃣ Final Answer | When the LLM replies with plain text, the loop terminates and returns the answer to the user. |
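The four‑step cycle above can be sketched in a few lines; the call_llm callable and the message shapes below are simplified stand‑ins for the real OpenAI client, not nanobot's actual internals:

```python
def run_agent_loop(call_llm, tools, user_message, max_iterations=40):
    """Minimal sketch: build context, call the LLM, execute tool calls,
    and repeat until the model replies with plain text."""
    messages = [{"role": "user", "content": user_message}]  # step 1: context
    for _ in range(max_iterations):
        reply = call_llm(messages)                          # step 2: LLM call
        tool_calls = reply.get("tool_calls") or []
        if not tool_calls:                                  # step 4: final answer
            return reply["content"]
        for call in tool_calls:                             # step 3: tool execution
            result = tools[call["name"]](**call.get("arguments", {}))
            messages.append({"role": "tool",
                             "name": call["name"],
                             "content": str(result)})
    return "Stopped after max_iterations without a final answer."
```

The max_iterations guard mirrors nanobot's default of 40 and prevents a confused model from looping forever.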
Memory is injected at every iteration. The MEMORY.md file stores timestamped facts, while daily logs (YYYY‑MM‑DD.md) capture short‑term context. A background consolidation job compresses old entries, keeping the token window lean.
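A minimal sketch of that dual write – one append to the long‑term file, one to the daily log (the remember helper and timestamp format are assumptions for illustration):

```python
import datetime, pathlib

def remember(fact, memory_dir):
    """Append a timestamped fact to MEMORY.md and to today's daily log."""
    memory_dir = pathlib.Path(memory_dir)
    memory_dir.mkdir(parents=True, exist_ok=True)
    now = datetime.datetime.now(datetime.timezone.utc)
    with (memory_dir / "MEMORY.md").open("a") as f:            # long-term facts
        f.write(f"- [{now:%Y-%m-%d %H:%M} UTC] {fact}\n")
    with (memory_dir / f"{now:%Y-%m-%d}.md").open("a") as f:   # daily journal
        f.write(f"- {fact}\n")
```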
4. Skills & Custom Tools – Extending Agent Capabilities
Nanobot’s Skills Loader reads Markdown files from skills/. Each skill contains:
- A human‑readable description for the LLM.
- Step‑by‑step instructions the model follows when the skill is activated.
- An always_available flag that decides whether the skill is loaded on every loop.
Example: a Data Analyst skill that forces the agent to compute mean, median, and outliers before responding.
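A loader for such files might look like the following sketch; the always_available front‑matter line is an assumed format, not necessarily nanobot's exact syntax:

```python
import pathlib

def load_skills(skills_dir):
    """Read Markdown skill files; a first line 'always_available: true'
    marks a skill that is loaded on every loop (flag format assumed)."""
    skills = []
    for path in sorted(pathlib.Path(skills_dir).glob("*.md")):
        lines = path.read_text().splitlines()
        always = bool(lines) and lines[0].strip().lower() == "always_available: true"
        body = "\n".join(lines[1:] if always else lines)  # strip the flag line
        skills.append({"name": path.stem,
                       "always_available": always,
                       "instructions": body})
    return skills
```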
Custom tools are pure Python functions registered with a JSON schema. Below are three useful additions you can copy into any nanobot project:
# 1️⃣ Roll dice
def roll_dice(num_dice: int = 1, sides: int = 6) -> str:
    import random
    rolls = [random.randint(1, sides) for _ in range(num_dice)]
    return f"Rolled {num_dice}d{sides}: {rolls} (total {sum(rolls)})"
# 2️⃣ Text statistics
def text_stats(text: str) -> dict:
    words = len(text.split())
    chars = len(text)
    sentences = max(text.count('.') + text.count('!') + text.count('?'), 1)
    return {"words": words, "characters": chars, "sentences": sentences,
            "reading_time_min": round(words / 200, 1)}
# 3️⃣ Secure password generator
def generate_password(length: int = 16) -> str:
    import string, secrets
    pool = string.ascii_letters + string.digits + "!@#$%^&*"
    # secrets (not random) provides cryptographically strong choices
    return "".join(secrets.choice(pool) for _ in range(length))
Once registered, the LLM can call these tools just like built‑in file I/O utilities, enabling richer multi‑step workflows.
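Registration itself can be sketched as a small registry that pairs each function with its JSON schema; the register_tool and dispatch names below are illustrative, not nanobot's actual API:

```python
TOOL_REGISTRY = {}

def register_tool(fn, description, parameters):
    """Pair a plain Python function with the JSON schema advertised to the LLM."""
    TOOL_REGISTRY[fn.__name__] = {
        "fn": fn,
        "schema": {"type": "function",
                   "function": {"name": fn.__name__,
                                "description": description,
                                "parameters": parameters}},
    }

def dispatch(name, arguments):
    """Execute a tool call emitted by the model."""
    return TOOL_REGISTRY[name]["fn"](**arguments)

def word_count(text: str) -> int:
    return len(text.split())

register_tool(word_count, "Count the words in a string.",
              {"type": "object",
               "properties": {"text": {"type": "string"}},
               "required": ["text"]})
```

At each LLM call you would pass every registered schema in the tools parameter, then feed any returned tool_calls through dispatch.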
5. Sub‑Agents & Cron Scheduling – Parallelism & Automation
Complex projects often need background workers. Nanobot’s Sub‑Agent Manager spawns isolated loops that run up to 15 iterations independently. Each sub‑agent receives its own tool registry (no recursive spawning) and reports results back through the message bus.
Typical use‑cases include:
- Fetching market data while the main agent drafts a report.
- Running a long‑running web‑scrape in the background.
- Generating multiple image variations with AI Image Generator concurrently.
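A thread‑based sketch of such a worker (the spawn_sub_agent helper is illustrative; nanobot's real manager additionally caps iterations and isolates the tool registry):

```python
import threading

def spawn_sub_agent(task_fn, bus, name):
    """Run task_fn in a background worker and report the outcome on the bus."""
    def worker():
        try:
            bus.put({"agent": name, "result": task_fn()})
        except Exception as exc:          # failures are reported, not raised
            bus.put({"agent": name, "error": str(exc)})
    thread = threading.Thread(target=worker, name=name, daemon=True)
    thread.start()
    return thread
```

The main agent keeps draining the bus between loop iterations, so slow background work never blocks the conversation.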
The built‑in Cron Service registers jobs that fire at fixed intervals. A simple “memory cleanup” job might look like this:
cron_job = {
    "name": "memory_cleanup",
    "message": "Consolidate my memory files.",
    "interval_seconds": 12 * 3600,  # every 12 hours
}
# The scheduler injects the message into the Message Bus → agent loop
Because the cron jobs are just messages, you can route them to any UI – for example, a Telegram bot using the Telegram integration on UBOS or a Slack webhook.
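Under the hood, the scheduler only has to decide which jobs are due and push their messages onto the bus; a minimal, clock‑injected sketch (due_jobs is an assumed name, not nanobot's API):

```python
def due_jobs(jobs, last_run, now):
    """Return jobs whose interval has elapsed, updating last_run in place.

    jobs:     list of dicts with "name", "message", "interval_seconds"
    last_run: mapping of job name -> timestamp of the last firing
    now:      current time in seconds (injected, so the logic is testable)
    """
    fired = []
    for job in jobs:
        if now - last_run.get(job["name"], 0.0) >= job["interval_seconds"]:
            fired.append(job)
            last_run[job["name"]] = now
    return fired
```

A real service would call this every few seconds with time.monotonic() and put each fired job's message on the bus.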
6. Practical Demo – From Time Query to Project Plan
The following end‑to‑end scenario showcases every subsystem in action:
- Ask the agent for the current time and a quick calculation.
- Instruct it to write a project_plan.txt file with three bullet points.
- Tell the agent to remember the project name in long‑term memory.
- Read the file back, verify its contents, and finally summarize the whole workflow.
Result (truncated for brevity):
🕒 Current time: 2026‑03‑29 14:12 UTC
✅ Calculation: 2²⁰ + 42 = 1 048 618
📄 Created project_plan.txt with three bullets.
🧠 Memory saved: “Building a personal AI assistant”.
📄 File content verified.
📚 Summary: The agent queried time, performed a math operation, wrote a file, stored a memory, and confirmed the output—all within 7 loop iterations.
This demo proves that a nanobot instance can act as a full‑stack AI assistant without any external orchestration layer.
7. Leveraging UBOS to Accelerate Nanobot Deployments
While nanobot gives you the engine, the UBOS platform provides the surrounding infrastructure that turns a prototype into a production service:
- Workflow Automation Studio – visually design the agent loop, schedule cron jobs, and monitor sub‑agent health without writing extra code. (Workflow automation studio)
- Web App Editor on UBOS – spin up a low‑code UI that forwards user messages to your nanobot backend. (Web app editor on UBOS)
- AI Marketing Agents – reuse pre‑built templates like AI SEO Analyzer or AI Article Copywriter to generate content for your nanobot’s knowledge base.
- Enterprise AI Platform by UBOS – scale nanobot clusters, enforce role‑based access, and integrate with existing data warehouses. (Enterprise AI platform by UBOS)
- Partner Program – co‑market your custom nanobot extensions and earn revenue shares. (UBOS partner program)
For startups, the UBOS for startups bundle includes free credits for the first 10 k API calls, perfect for testing GPT‑4o‑mini integrations. SMBs can adopt the UBOS solutions for SMBs plan, which bundles the workflow studio, cron scheduler, and a curated set of UBOS templates for quick start (e.g., AI Video Generator or AI Chatbot template).
8. Conclusion & Next Steps
Nanobot AI proves that autonomous agents no longer require massive codebases. By combining a concise Python core with GPT‑4o‑mini, persistent memory, extensible skills, sub‑agents, and cron scheduling, developers can build production‑grade assistants in a single notebook.
Ready to turn the demo into a live service? Follow these three actions:
- Clone the nanobot repository and run the setup.sh script.
- Deploy the backend on the UBOS platform using the UBOS pricing plan that matches your traffic.
- Enhance the agent with a custom skill (e.g., AI LinkedIn Post Optimization) and expose it via the ChatGPT and Telegram integration for real‑time collaboration.
For a deeper dive into the original tutorial, read the MarkTechPost article. The guide walks through each code block step‑by‑step, but the concepts above give you a higher‑level map you can adapt to any LLM provider.
Start building your autonomous AI assistant today—nanobot gives you the engine, UBOS gives you the runway.