- Updated: March 16, 2026
- 7 min read
How LLMs Transform Software Development – UBOS News
Using large language models (LLMs) as collaborative coding partners lets developers design, implement, and review software with dramatically lower defect rates, faster iteration cycles, and a shift from hand‑coding to high‑level system architecture.
How I Write Software with LLMs – Insights, Workflow, and UBOS Tools to Accelerate AI‑Assisted Development

Why LLMs Are Changing the Way Developers Build Software
Stavros Guy, a seasoned software engineer, recently published a detailed post describing how he now writes software with LLMs rather than traditional hand‑coding. The core premise is simple: LLMs handle repetitive, boilerplate, and even complex code generation, while the human focuses on architectural decisions, trade‑offs, and quality gates. This approach yields:
- Defect rates lower than manual coding for well‑understood stacks.
- Rapid prototyping—features that once took days now appear in hours.
- Preserved ownership of the system because the developer still defines the high‑level design.
For developers, tech entrepreneurs, and AI enthusiasts looking to embed LLM software development into their workflow, the article below breaks down Stavros’ process, highlights key projects (including an email‑support automation), and maps each step to UBOS solutions that can make the workflow even smoother.
The Core LLM‑Driven Workflow
Stavros structures his work around three autonomous agents: an Architect, a Developer, and one or more Reviewers. The agents communicate through a harness (he calls it OpenCode) that supports multiple models, custom tools, and session persistence.
1. Architect – The Strategic Brain
The architect is the strongest model available (currently Claude Opus 4.6). Its job is to:
- Clarify the high‑level goal (e.g., “Add exponential back‑off to the LLM request layer”).
- Iteratively refine the specification until the developer receives a concrete, file‑level plan.
- Produce a task breakdown that includes file paths, function signatures, and any required configuration changes.
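To make the back‑off example concrete, a retry wrapper along these lines is what such a plan might call for (a sketch; the function names and defaults are illustrative assumptions, not code from the post):

```typescript
// Exponential back-off for a flaky request layer (illustrative sketch).
function backoffDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  // Delay doubles each attempt: 500 ms, 1 s, 2 s, 4 s, ... capped at maxMs.
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```

In this workflow, the architect's plan would pin down where the wrapper lives and what its defaults should be; the developer model then fills in the body.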
2. Developer – The Efficient Executor
The developer model (often Sonnet 4.6) receives the architect’s plan and implements it exactly as specified. Because the plan already decides what to implement, the developer focuses on:
- Translating the plan into syntactically correct code.
- Calling auxiliary tools (e.g., a `saveAttachment()` helper) when needed.
- Submitting the diff to the reviewers for feedback.
3. Reviewers – The Quality Gatekeepers
At least two reviewer models (commonly Codex 5.4 and Gemini 3 Flash) independently critique the diff. Their responsibilities include:
- Spotting logical errors, security oversights, or performance regressions.
- Suggesting refinements that align with the original architectural intent.
- Escalating disagreements to the architect for final arbitration.
This three‑agent loop ensures that every change is reviewed at the function level while preserving the developer’s high‑level vision.
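In pseudocode, that loop looks roughly like the following (a sketch: the `Agent` type, `runChange` helper, and the "LGTM" convention are assumptions for illustration, not the OpenCode API):

```typescript
// Minimal sketch of an architect -> developer -> reviewers loop.
type Agent = (prompt: string) => Promise<string>;

async function runChange(
  goal: string,
  architect: Agent,
  developer: Agent,
  reviewers: Agent[],
): Promise<string> {
  // 1. The architect turns the goal into a concrete, file-level plan.
  const plan = await architect(`Produce a file-level plan for: ${goal}`);
  // 2. The developer implements the plan as a diff.
  let diff = await developer(`Implement this plan as a diff:\n${plan}`);
  // 3. Reviewers critique independently; disagreements escalate to the architect.
  const reviews = await Promise.all(reviewers.map((r) => r(`Review this diff:\n${diff}`)));
  for (const review of reviews) {
    if (review.trim() !== "LGTM") {
      const ruling = await architect(`Arbitrate this review comment:\n${review}`);
      diff = await developer(`Revise the diff per this ruling:\n${ruling}`);
    }
  }
  return diff;
}
```

The key property is that the developer never decides *what* to build: every revision is either the original plan or an architect ruling.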
Why Multiple Models Matter
Different LLMs excel at different tasks. For example, Codex is nitpicky and great for syntax checks, whereas Opus tends to make decisions that mirror a senior engineer’s intuition. By rotating models, Stavros gets a “second pair of eyes” effect without human intervention.
UBOS supports this multi‑model strategy out of the box. The OpenAI ChatGPT integration and the Chroma DB integration let you store embeddings for each model’s output, enabling fast cross‑model retrieval and comparison.
Real‑World Projects Built with the LLM Pipeline
Stavros showcases several production‑grade applications that prove LLMs are not just for toy scripts. Below are the most illustrative examples, each linked to a UBOS template that can jump‑start a similar project.
📧 Email Support Automation
Using the same three‑agent loop, Stavros added an email‑support channel to his personal bot, Stavrobot. The workflow involved:
- Designing a webhook that receives raw RFC‑2822 messages (via a Cloudflare Email Worker).
- Parsing MIME parts with `mailparser` and converting HTML to Markdown.
- Routing the parsed message through the existing LLM pipeline, allowing the bot to reply via SMTP.
- Implementing a domain‑wide wildcard allowlist (e.g., `*@example.com`) for flexible forwarding.
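A wildcard allowlist check of this kind fits in a few lines (a sketch under the assumption that entries are either exact addresses or `*@domain` wildcards; this is not the actual Stavrobot code):

```typescript
// Returns true if `sender` matches any allowlist entry.
// Entries are either exact addresses or domain wildcards like "*@example.com".
function isAllowed(sender: string, allowlist: string[]): boolean {
  const addr = sender.trim().toLowerCase();
  return allowlist.some((entry) => {
    const e = entry.trim().toLowerCase();
    if (e.startsWith("*@")) {
      // Match the whole "@domain" suffix so "evil-example.com" cannot sneak in.
      return addr.endsWith(e.slice(1));
    }
    return addr === e;
  });
}
```

Matching the full `@domain` suffix (rather than the bare domain string) is what keeps look‑alike domains out.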
This feature is now part of the ChatGPT and Telegram integration suite, and developers can clone the Email Support Automation template (hypothetical link) to replicate the setup in minutes.
🤖 Personal AI Assistant (Stavrobot)
Stavrobot is a security‑focused LLM personal assistant that manages calendar events, performs web research, and even writes code on demand. Key architectural choices include:
- Separate interlocutor modules for each channel (Telegram, Signal, Email).
- Fine‑grained permission checks via an allowlist stored in a JSON file.
- Dynamic tool registration (e.g., `send_email`, `send_telegram_message`).
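Dynamic tool registration can be modeled as a simple name‑to‑handler map (the tool names come from the post; the registry API itself is a hypothetical sketch):

```typescript
// A minimal tool registry: tools are registered by name and dispatched
// when the LLM asks to call one.
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

const tools = new Map<string, ToolHandler>();

function registerTool(name: string, handler: ToolHandler): void {
  tools.set(name, handler);
}

async function dispatch(name: string, args: Record<string, unknown>): Promise<string> {
  const handler = tools.get(name);
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}

// Example registrations mirroring the tool names mentioned above
// (the handler bodies are placeholders).
registerTool("send_email", async (args) => `email queued to ${args.to}`);
registerTool("send_telegram_message", async (args) => `telegram sent to ${args.chat}`);
```

Because tools are looked up by name at call time, each channel module can register only the tools it is permitted to expose.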
UBOS’s Workflow automation studio provides a visual canvas to model such multi‑channel agents without writing a single line of glue code.
🕰️ “Irregular Tick” Wall Clock Art Piece
Although more artistic than functional, this project demonstrates the flexibility of the LLM pipeline. The clock’s firmware runs on a microcontroller, but the timing logic (randomized tick intervals) was generated entirely by an LLM, then reviewed and tweaked by a reviewer model. The source code lives in the UBOS templates for quick start, showing how even hardware‑adjacent projects can benefit from AI‑assisted coding.
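The randomized timing logic might look like this sketch (the real firmware is not reproduced in the post; this only illustrates drift‑compensated jitter, with the ±400 ms range as an assumed parameter):

```typescript
// Each tick is jittered around one second; the previous tick's jitter is
// subtracted from the next delay so the clock stays accurate on average.
function nextTick(
  prevJitterMs: number,
  rand: () => number = Math.random,
): { delayMs: number; jitterMs: number } {
  const jitterMs = (rand() - 0.5) * 800; // +/- 400 ms of irregularity
  return { delayMs: 1000 + jitterMs - prevJitterMs, jitterMs };
}
```

Consecutive delays cancel each other's jitter (600 ms followed by 1400 ms still sums to two seconds), so the irregularity never accumulates into drift.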
🗺️ Pine Town – Infinite Multiplayer Canvas
A collaborative drawing board built with Node.js and WebSockets. The LLM helped scaffold the real‑time synchronization layer, while the architect ensured scalability by adding a Redis pub/sub backend. Developers can explore the Pine Town template to see the exact code structure.
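The synchronization layer can be sketched around a small broker interface: in production it would be backed by Redis pub/sub, while the in‑memory implementation shown here (an illustrative assumption, not the Pine Town source) is enough for a single node or for tests:

```typescript
// A broker abstraction: Redis pub/sub in production, in-memory for tests.
interface Broker {
  publish(channel: string, message: string): void;
  subscribe(channel: string, onMessage: (message: string) => void): void;
}

class InMemoryBroker implements Broker {
  private subs = new Map<string, Array<(m: string) => void>>();

  publish(channel: string, message: string): void {
    for (const fn of this.subs.get(channel) ?? []) fn(message);
  }

  subscribe(channel: string, onMessage: (m: string) => void): void {
    const list = this.subs.get(channel) ?? [];
    list.push(onMessage);
    this.subs.set(channel, list);
  }
}

// Each WebSocket server instance subscribes to the canvas channel and
// rebroadcasts draw events to its locally connected clients.
function fanOut(broker: Broker, sendToLocalClients: (event: string) => void): void {
  broker.subscribe("canvas:draw", sendToLocalClients);
}
```

Swapping the in‑memory broker for a Redis‑backed one is what lets multiple WebSocket servers share a single infinite canvas.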
Benefits and Challenges of LLM‑Powered Development
✅ Benefits
| Area | What LLMs Deliver |
|---|---|
| Speed | Feature prototypes appear in hours instead of days. |
| Quality | Defect rates drop because reviewers are separate, specialized models. |
| Scalability | Multiple agents can run in parallel, handling different services (Telegram, Email, etc.) simultaneously. |
| Knowledge Transfer | The LLM retains context across sessions, reducing onboarding time for new team members. |
⚠️ Challenges
- Model‑Specific Blind Spots: Each LLM has gaps (e.g., Codex can be overly pedantic). Mitigate by rotating models.
- Architectural Drift: If the developer lacks deep knowledge of the stack, the LLM may make poor design choices. The solution is to keep the architect role focused on high‑level decisions.
- Security Concerns: Prompt injection or data leakage can occur if inputs aren’t sanitized. UBOS’s Telegram integration provides built‑in sandboxing for inbound messages.
- Dependency on Model Availability: Outages of a particular provider can halt the pipeline. UBOS’s partner program offers multi‑cloud fallback options.
Practical Takeaways for Developers
If you’re ready to adopt an LLM‑centric workflow, follow these concrete steps:
- Choose a Harness: Start with the UBOS platform overview, which abstracts model selection, session storage, and tool registration.
- Define Agent Roles: Create three skill files: `architect.yaml`, `developer.yaml`, and `reviewer.yaml`. Use the Enterprise AI platform by UBOS to version‑control them.
- Integrate Multiple Models: Register OpenAI, Anthropic, and Google models via the OpenAI ChatGPT integration and the Chroma DB integration for cross‑model embeddings.
- Set Up a Secure Allowlist: Use UBOS’s solutions for SMBs to store allowlist entries in encrypted JSON. Remember to enable wildcard support for email domains if needed.
- Automate Testing: Leverage the Web app editor on UBOS to generate unit tests automatically from LLM‑produced code snippets.
- Iterate with Reviewers: Run at least two reviewer models on every pull request. UBOS’s AI marketing agents can be repurposed as code reviewers with minimal configuration.
By the end of this loop, you’ll have a production‑ready feature that has been architected, coded, and reviewed entirely by AI, with the human acting as the final decision‑maker.
Start Building Your Own LLM‑Powered Apps Today
UBOS provides everything you need to replicate Stavros’ workflow without reinventing the wheel:
- UBOS pricing plans – choose a tier that includes multi‑model access.
- UBOS portfolio examples – see real‑world deployments similar to email support automation.
- AI Email Marketing template – a ready‑made email‑channel integration.
- AI SEO Analyzer – combine LLM coding with SEO‑focused output.
- AI Chatbot template – the fastest way to spin up a multi‑channel bot.
Ready to experiment? Visit the UBOS homepage and launch a free trial. Our About UBOS page explains the team behind the platform and how we prioritize security, scalability, and developer experience.