- Updated: March 11, 2026
- 7 min read
From Goals to Aspects, Revisited: An NFR Pattern Language for Agentic AI Systems
Direct Answer
The paper introduces a systematic non‑functional requirements (NFR) pattern language that translates i* goal models into concrete, aspect‑oriented implementations for agentic AI systems. By exposing twelve reusable patterns across security, reliability, observability, and cost‑management, it gives engineers a principled way to modularize cross‑cutting concerns that typically cause production failures.
Background: Why This Problem Is Hard
Agentic AI systems—autonomous assistants, self‑optimizing bots, and multi‑agent orchestration platforms—are increasingly deployed in high‑stakes environments such as finance, healthcare, and critical infrastructure. Their value stems from the ability to reason, plan, and act without human intervention, but that same autonomy creates a dense web of cross‑cutting concerns:
- Security: Prompt injection, sandbox escape, and data leakage are emerging attack vectors unique to language‑model‑driven agents.
- Reliability: Agents must tolerate network partitions, model drift, and unexpected tool failures while preserving task continuity.
- Observability: Debugging an autonomous workflow requires tracing prompts, token usage, and tool invocations across heterogeneous services.
- Cost Management: Token‑based pricing models make uncontrolled generation a direct financial risk.
Current engineering practices treat these concerns as afterthoughts, sprinkling ad‑hoc checks throughout codebases. Traditional aspect‑oriented programming (AOP) frameworks—originating in the Java and .NET ecosystems—lack constructs for the dynamic, prompt‑driven nature of modern agents. Consequently, teams experience:
- High defect density when security patches collide with core reasoning logic.
- Opaque failure modes that are difficult to reproduce.
- Unpredictable operational costs that erode ROI.
These pain points explain why many AI projects stall before reaching production, despite impressive model performance on benchmark datasets.
What the Researchers Propose
The authors extend the classic “goals‑to‑aspects” methodology—originally demonstrated for i* goal models—to the agentic AI domain. Their contribution is a pattern language consisting of twelve reusable NFR patterns, each mapped to a concrete implementation using an AOP framework built for the Rust language. The language is organized into four categories:
| Category | Pattern Count | Representative Patterns |
|---|---|---|
| Security | 3 | Tool‑Scope Sandboxing, Prompt Injection Detection, Action Authentication |
| Reliability | 3 | Retry‑With‑Backoff, Fault‑Tolerant Tool Wrapper, State Checkpointing |
| Observability | 3 | Prompt Trace Logging, Token Budget Auditing, Action Audit Trail |
| Cost Management | 3 | Token Budget Enforcement, Dynamic Model Selection, Usage‑Based Throttling |
Four of the patterns address concerns that are absent from traditional AOP literature:
- Tool‑Scope Sandboxing: Isolates external tool calls (e.g., web APIs, database adapters) so that a compromised tool cannot affect the agent’s core reasoning loop.
- Prompt Injection Detection: Monitors incoming prompts for adversarial patterns that attempt to hijack the agent’s goal hierarchy.
- Token Budget Enforcement: Enforces a hard limit on token consumption per task, preventing runaway costs.
- Action Audit Trail: Records every tool invocation and model output in an immutable log for post‑mortem analysis.
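To make the token‑budget idea concrete, here is a minimal sketch of a per‑task budget guard. The `TokenBudget` type and its API are illustrative assumptions, not the paper's actual implementation; the only property taken from the source is the hard per‑task limit on token consumption.

```rust
/// Hedged sketch: a per-task token budget with a hard limit.
/// `TokenBudget`, `BudgetError`, and `try_consume` are illustrative names.
pub struct TokenBudget {
    limit: u64,
    used: u64,
}

#[derive(Debug, PartialEq)]
pub enum BudgetError {
    /// The call would push the task past its hard limit.
    Exceeded { requested: u64, remaining: u64 },
}

impl TokenBudget {
    pub fn new(limit: u64) -> Self {
        Self { limit, used: 0 }
    }

    /// Reserve tokens before an LLM call; reject the call (rather than
    /// truncating it mid-generation) if it would exceed the budget.
    pub fn try_consume(&mut self, tokens: u64) -> Result<(), BudgetError> {
        let remaining = self.limit - self.used;
        if tokens > remaining {
            return Err(BudgetError::Exceeded { requested: tokens, remaining });
        }
        self.used += tokens;
        Ok(())
    }

    pub fn remaining(&self) -> u64 {
        self.limit - self.used
    }
}
```

An aspect woven before every LLM call would invoke `try_consume` with the call's estimated token count and abort the action on `Err`, which is what turns overspend from a billing surprise into an explicit, loggable failure.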
At the conceptual level, the methodology follows three steps:
- Goal Modeling: Engineers capture functional goals (e.g., “fetch user data”) and soft‑goals (e.g., “maintain privacy”) in an i* diagram.
- Aspect Discovery: The V‑graph extension identifies which soft‑goals intersect multiple functional goals, flagging them as cross‑cutting.
- Pattern Mapping: Each identified cross‑cutting concern is matched to a concrete pattern, which is then woven into the Rust codebase via the AOP framework.
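The aspect‑discovery step can be sketched in a few lines: a soft‑goal is flagged as cross‑cutting when it contributes to more than one functional goal. The graph representation below is an illustrative assumption, not the paper's V‑graph tooling.

```rust
use std::collections::HashMap;

/// Hedged sketch of aspect discovery: map each soft-goal to the
/// functional goals it contributes to, and flag those that touch
/// more than one goal as cross-cutting concerns.
pub fn cross_cutting(contributions: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    let mut flagged: Vec<String> = contributions
        .iter()
        .filter(|(_, goals)| goals.len() > 1) // intersects multiple goals
        .map(|(soft_goal, _)| soft_goal.to_string())
        .collect();
    flagged.sort(); // deterministic output for downstream pattern mapping
    flagged
}
```

Each flagged soft‑goal then becomes a lookup key into the pattern catalog; soft‑goals that touch only one functional goal stay local to that goal's module and need no aspect.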
How It Works in Practice
Implementing the pattern language in a real agentic system involves a straightforward workflow:
- Model the Agent’s Objectives: Using an i* modeling tool, the team defines a hierarchy of goals and soft‑goals. For example, a customer‑support bot may have functional goals like “resolve ticket” and soft‑goals such as “ensure data confidentiality”.
- Run the V‑Graph Analyzer: The analyzer processes the model, producing a graph where nodes represent goals and edges capture dependencies. Cross‑cutting soft‑goals appear as nodes with multiple outgoing edges.
- Select Corresponding Patterns: The analyzer suggests patterns from the language. In our example, “Data Confidentiality” maps to the “Prompt Injection Detection” and “Tool‑Scope Sandboxing” patterns.
- Weave Aspects via Rust AOP: Developers annotate Rust modules with `@aspect` directives. The AOP compiler generates woven code that injects the selected patterns around the target functions (e.g., before every LLM call, after each tool invocation).
- Deploy and Observe: The woven agent runs in production. Observability aspects automatically emit structured logs to a tracing backend, while security aspects enforce sandbox boundaries at runtime.
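The paper's `@aspect` weaving is compiler‑driven and its framework is not shown here; as a stand‑in, the effect of around‑advice can be approximated in plain Rust with a higher‑order wrapper. The sketch below applies the Retry‑With‑Backoff pattern around an arbitrary fallible tool call; all names are illustrative.

```rust
use std::{thread, time::Duration};

/// Hedged sketch: around-advice approximated as a higher-order wrapper.
/// Retries a fallible tool call up to `max_attempts` times, sleeping
/// with exponential backoff (base * 2^(attempt-1)) between attempts.
pub fn with_retry<T, E>(
    max_attempts: u32,
    base_delay_ms: u64,
    mut call: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match call() {
            Ok(value) => return Ok(value),
            Err(err) => {
                attempt += 1;
                if attempt >= max_attempts {
                    // Budget exhausted: surface the last error to the caller.
                    return Err(err);
                }
                // Exponential backoff before the next attempt.
                thread::sleep(Duration::from_millis(base_delay_ms << (attempt - 1)));
            }
        }
    }
}
```

A compiler‑woven aspect would wrap the target function with this logic automatically wherever the goal model demands it, so the reasoning code never mentions retries; the wrapper above shows the same behavior made explicit.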
What distinguishes this approach from generic AOP is the tight coupling between goal semantics and aspect implementation. Rather than manually deciding where to place logging or retry logic, the pattern language derives those decisions from the system’s own goal model, ensuring alignment between business intent and engineering enforcement.
Evaluation & Results
The authors validated the pattern language through a case study on an open‑source autonomous agent framework. The evaluation focused on three research questions:
- RQ1: Can the methodology systematically uncover all relevant cross‑cutting concerns?
- RQ2: Does weaving the patterns reduce defect density compared to a baseline without aspect modularization?
- RQ3: How does the approach impact operational cost predictability?
Key findings include:
- Comprehensive Discovery: The V‑graph analyzer identified 11 out of 12 known soft‑goals in the benchmark model, missing only a deliberately hidden “latency‑aware scheduling” goal, which the authors attribute to insufficient modeling granularity.
- Defect Reduction: In a controlled A/B test, the aspect‑woven version exhibited a 42% lower post‑deployment bug rate (measured by incident tickets) over a three‑month period.
- Cost Predictability: Token‑budget enforcement patterns reduced average token overspend per task from 18% to under 2%, translating to a 30% reduction in monthly cloud‑LLM expenses.
- Observability Gains: Action audit trails enabled root‑cause analysis of 87% of incidents within 15 minutes, compared to 54% for the baseline.
These results demonstrate that a goal‑driven aspect language not only uncovers hidden NFRs but also translates them into tangible reliability and financial benefits. The full experimental details are available in the arXiv paper.
Why This Matters for AI Systems and Agents
For practitioners building production‑grade agents, the pattern language offers a repeatable, evidence‑backed process to embed essential non‑functional guarantees early in the development lifecycle. The practical implications are threefold:
- Accelerated Time‑to‑Market: By automating the discovery of security, reliability, and observability concerns, teams spend less time on ad‑hoc debugging and more on core product features.
- Reduced Operational Risk: Systematic sandboxing and prompt‑injection detection directly address attack surfaces that have caused high‑profile breaches in LLM‑powered assistants.
- Predictable Cost Structure: Token‑budget enforcement aligns financial planning with actual usage, a critical factor for enterprises negotiating large‑scale model contracts.
Organizations that already leverage ubos.tech/agents can integrate the pattern language into their existing Rust‑based pipelines without rewriting business logic. The modular aspects also dovetail with modern observability stacks, feeding structured events into platforms like OpenTelemetry for unified monitoring.
What Comes Next
While the study establishes a solid foundation, several open challenges remain:
- Tooling Maturity: The current V‑graph analyzer is a prototype; production teams will need IDE plugins and CI integration to make goal modeling a seamless part of the workflow.
- Cross‑Language Support: The Rust AOP implementation showcases performance benefits, but many agentic systems are written in Python or TypeScript. Porting the pattern language to those ecosystems is an active research direction.
- Dynamic Goal Evolution: Agents that learn new goals at runtime may outgrow the static i* model. Future work could explore runtime adaptation of aspects based on continuous goal inference.
- Standardization: A community‑driven catalog of NFR patterns could accelerate adoption, similar to design pattern libraries in software engineering.
Addressing these gaps will broaden the impact of goal‑driven aspect engineering beyond the current Rust‑centric niche. Companies interested in pioneering this approach can start by piloting the pattern language on a low‑risk micro‑service and gradually scaling to mission‑critical agents. For guidance on integrating these concepts into your AI stack, reach out via ubos.tech/contact.