- Updated: March 11, 2026
- 6 min read
Exploring Plan Space through Conversation: An Agentic Framework for LLM-Mediated Explanations in Planning
Direct Answer
The paper introduces Goal‑Conflict Explanation Planning (GCEP), a novel agentic framework that lets large language models (LLMs) generate, evaluate, and present explanations for planning problems where multiple objectives clash. By turning abstract goal conflicts into concrete, user‑friendly narratives, GCEP bridges the gap between autonomous planning agents and human decision‑makers, making AI‑driven plans transparent and actionable.
Background: Why This Problem Is Hard
Modern AI systems increasingly rely on LLMs to orchestrate complex tasks—ranging from supply‑chain optimization to autonomous vehicle routing. These tasks often involve conflicting goals (e.g., minimizing cost while maximizing safety). Existing planning pipelines typically output a single plan or a set of Pareto‑optimal solutions, leaving users to infer the trade‑offs themselves. This opacity creates several bottlenecks:
- Interpretability Gap: Stakeholders cannot easily understand why a plan favors one objective over another.
- Decision Fatigue: Presenting raw numerical trade‑offs forces users to perform mental calculations, slowing down critical decisions.
- Trust Deficit: Without clear rationales, users are reluctant to delegate authority to autonomous agents.
Traditional approaches to explainable AI (XAI) focus on post‑hoc feature attribution or rule extraction, which do not capture the dynamic interplay of multiple planning goals. Moreover, most LLM‑based planners treat explanation as an afterthought, generating generic text that lacks alignment with the underlying optimization logic. The result is a mismatch between the planner’s internal reasoning and the user’s mental model.
What the Researchers Propose
GCEP reframes explanation generation as an integral part of the planning loop. The framework consists of three tightly coupled components:
- Goal Conflict Analyzer (GCA): Detects and formalizes conflicts among user‑specified objectives using constraint‑based reasoning.
- Explanation Synthesizer (ES): Leverages an LLM to translate the abstract conflict representation into natural‑language narratives that highlight why certain goals dominate or are compromised.
- Plan Executor with Feedback Loop (PEFL): Generates candidate plans, evaluates them against the conflict model, and iteratively refines both the plan and its explanation based on user feedback.
Crucially, the GCA produces a conflict graph that the ES consumes, ensuring that explanations are grounded in the same logical structure that guides plan synthesis. This tight coupling eliminates the “explanation drift” problem where the narrative diverges from the actual decision logic.
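The preprint does not prescribe a concrete data structure for the conflict graph, but a minimal Python sketch helps make the idea tangible. Everything below (class names, fields, the severity score) is an illustrative assumption, not the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Objective:
    """A user-specified goal, e.g. 'minimize fuel cost'."""
    name: str
    direction: str       # "min" or "max"
    weight: float = 1.0  # preference weight, re-tunable via user feedback

@dataclass
class ConflictGraph:
    """Nodes are objectives; an edge marks a pair of goals that cannot be
    simultaneously optimized, annotated with a severity score in [0, 1]."""
    objectives: dict = field(default_factory=dict)  # name -> Objective
    conflicts: dict = field(default_factory=dict)   # (a, b) -> severity

    def add_objective(self, obj: Objective) -> None:
        self.objectives[obj.name] = obj

    def add_conflict(self, a: str, b: str, severity: float) -> None:
        self.conflicts[tuple(sorted((a, b)))] = severity

    def conflicts_of(self, name: str):
        """All goals that clash with `name`, with severities."""
        return [(b if a == name else a, s)
                for (a, b), s in self.conflicts.items() if name in (a, b)]

# Example using the urban-logistics objectives from the evaluation below.
graph = ConflictGraph()
for o in (Objective("fuel_cost", "min"),
          Objective("delivery_time", "min"),
          Objective("emissions", "min")):
    graph.add_objective(o)
graph.add_conflict("fuel_cost", "delivery_time", severity=0.8)
```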
How It Works in Practice
The end‑to‑end workflow can be visualized as a cyclical process:
- User Input: The user defines a set of objectives (e.g., cost, latency, energy consumption) and any hard constraints.
- Conflict Detection: The GCA parses the objectives, constructs a weighted constraint graph, and flags pairs of goals that cannot be simultaneously optimized.
- Plan Generation: The PEFL runs a planner (e.g., one based on mixed‑integer programming or reinforcement learning) to produce an initial plan that satisfies hard constraints while balancing soft goals.
- Explanation Drafting: The ES receives the conflict graph and the tentative plan, prompting the LLM with a structured template: “Explain why the plan prioritizes Goal A over Goal B given the following trade‑off.”
- User Review & Feedback: The user reads the explanation, adjusts preferences (e.g., re‑weighting goals), or requests alternative scenarios.
- Iterative Refinement: The PEFL re‑optimizes the plan using the updated preferences, and the ES regenerates a revised explanation. The loop continues until the user approves the plan‑explanation pair.
What sets GCEP apart is the bidirectional influence between planning and explanation. Rather than treating explanations as static outputs, the framework lets user feedback on the narrative directly reshape the optimization objective, creating a more collaborative planning experience.
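At the level of control flow, the loop described above can be sketched in a few lines of Python. The hooks `generate_plan`, `draft_explanation`, and `collect_feedback` are hypothetical stand‑ins for the PEFL, the ES, and the user interface; the paper specifies the loop's architecture, not this code:

```python
def planning_loop(graph, constraints,
                  generate_plan, draft_explanation, collect_feedback,
                  max_rounds=5):
    """Iterate plan generation and explanation until the user approves.

    `graph` is the ConflictGraph built by the GCA; the three callables
    are injected stand-ins for the PEFL, the ES, and the review UI.
    """
    plan, explanation = None, None
    for _ in range(max_rounds):
        # PEFL: optimize against hard constraints and current goal weights.
        weights = {name: o.weight for name, o in graph.objectives.items()}
        plan = generate_plan(constraints, weights)

        # ES: ground the narrative in the same conflict graph that
        # guided plan synthesis, avoiding "explanation drift".
        explanation = draft_explanation(plan, graph)

        # User review: approval, or re-weighted goals for another round.
        feedback = collect_feedback(plan, explanation)
        if feedback.get("approved"):
            break

        # Bidirectional coupling: feedback on the narrative directly
        # reshapes the optimization objective.
        for name, new_weight in feedback.get("reweights", {}).items():
            graph.objectives[name].weight = new_weight

    return plan, explanation
```

The design choice worth noting is that the explanation is drafted from the same `graph` object the optimizer consumes, which is what the paper credits for keeping the narrative aligned with the decision logic.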
Evaluation & Results
The authors evaluated GCEP across two domains:
- Urban Logistics: Routing delivery trucks while balancing fuel cost, delivery time windows, and emissions caps.
- Cloud Resource Allocation: Assigning compute instances to workloads with competing goals of latency, monetary cost, and energy efficiency.
For each domain, they conducted a user study with 48 participants (a mix of engineers, product managers, and domain experts). Participants were asked to compare three conditions:
- Baseline planner with no explanation.
- Planner with generic, post‑hoc explanations.
- GCEP‑enabled planner with conflict‑aware explanations.
Key findings include:
| Metric | Baseline | Generic Explanation | GCEP Explanation |
|---|---|---|---|
| Decision Time (seconds) | 84 | 62 | 38 |
| Plan Acceptance Rate | 57% | 71% | 89% |
| Perceived Trust (1–5 Likert scale) | 2.8 | 3.4 | 4.6 |
Beyond quantitative metrics, qualitative feedback highlighted that participants felt “the explanations made the trade‑offs explicit” and that they “allowed them to steer the optimizer without needing deep technical knowledge.” The study demonstrates that grounding explanations in the actual conflict model not only speeds up decisions but also improves user confidence in autonomous agents.
Why This Matters for AI Systems and Agents
GCEP addresses a core limitation of current LLM‑mediated agents: the inability to articulate why a chosen action aligns with—or sacrifices—specific business objectives. For practitioners building AI‑driven decision platforms, the framework offers several practical benefits:
- Transparent Orchestration: By exposing the conflict graph, system architects can audit and debug the reasoning pipeline, satisfying compliance requirements.
- Human‑in‑the‑Loop Efficiency: Explanations act as a concise “decision brief,” reducing the cognitive load on operators and enabling faster approvals.
- Adaptive Goal Management: The feedback loop lets users re‑prioritize goals on the fly, supporting dynamic business environments where priorities shift rapidly.
- Scalable Explainability: Because the Explanation Synthesizer reuses a single LLM prompt template, the approach scales across domains without bespoke rule‑based explainers (see the sketch below).
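To make the "single prompt template" point concrete, here is one hypothetical way such a template could be written. The wording mirrors the structured prompt quoted earlier, and the code builds on the ConflictGraph sketch from above rather than on anything published with the paper:

```python
EXPLANATION_TEMPLATE = """\
You are explaining a plan to a non-expert stakeholder.
Objectives: {objectives}
Detected conflict: {goal_a} vs. {goal_b} (severity {severity:.2f})
The plan prioritizes {goal_a} over {goal_b}.
Explain in plain language why this trade-off was made, citing only
the information above."""

def build_prompt(graph, goal_a: str, goal_b: str) -> str:
    """Fill the domain-agnostic template with domain-specific conflict data."""
    severity = graph.conflicts[tuple(sorted((goal_a, goal_b)))]
    objectives = ", ".join(f"{o.name} ({o.direction}, weight {o.weight})"
                           for o in graph.objectives.values())
    return EXPLANATION_TEMPLATE.format(objectives=objectives,
                                       goal_a=goal_a, goal_b=goal_b,
                                       severity=severity)

# The same template serves urban logistics ("fuel_cost" vs. "delivery_time")
# and cloud allocation ("latency" vs. "monetary_cost") unchanged, which is
# what lets the approach scale across domains.
```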
Organizations looking to embed trustworthy AI into their workflow can integrate GCEP with existing orchestration platforms. For example, ubos.tech’s agent marketplace can host a GCEP‑enabled planner as a plug‑and‑play service, while the orchestration layer can manage the feedback cycles and persist conflict graphs for audit trails.
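On the audit-trail point specifically, a conflict graph in the shape sketched earlier could be archived per feedback round as plain JSON; the schema below is an assumption for illustration, not a format defined by the paper:

```python
import json

def snapshot_conflict_graph(graph) -> str:
    """Serialize a ConflictGraph (see the earlier sketch) so each
    feedback round can be persisted for later audit."""
    return json.dumps({
        "objectives": [{"name": o.name,
                        "direction": o.direction,
                        "weight": o.weight}
                       for o in graph.objectives.values()],
        "conflicts": [{"goals": list(pair), "severity": s}
                      for pair, s in graph.conflicts.items()],
    }, indent=2)
```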
What Comes Next
While GCEP marks a significant step forward, the authors acknowledge several open challenges:
- Scalability of Conflict Graphs: In domains with hundreds of objectives, the graph can become dense, potentially overwhelming both the planner and the LLM.
- Multi‑Agent Coordination: Extending GCEP to scenarios where multiple autonomous agents negotiate conflicting goals remains an open research avenue.
- Robustness to Ambiguous Preferences: Users may provide vague or contradictory weightings; future work could incorporate preference elicitation techniques to resolve ambiguity.
Potential future directions include:
- Integrating causal inference to enrich explanations with “what‑if” analyses, helping users understand downstream effects of goal adjustments.
- Coupling GCEP with reinforcement learning agents that learn to anticipate user preference shifts, reducing the number of feedback iterations.
- Deploying the framework in high‑stakes environments such as healthcare scheduling or autonomous fleet management, where explainability is not just a convenience but a regulatory necessity.
Developers interested in experimenting with the framework can explore the open‑source prototype hosted on GitHub and contribute enhancements for multi‑agent scenarios. For enterprises ready to adopt, ubos.tech’s planning suite offers a managed pathway to embed conflict‑aware explanations into existing AI pipelines.
References
For a complete technical description, see the original preprint, “Exploring Plan Space through Conversation: An Agentic Framework for LLM‑Mediated Explanations in Planning,” which introduces Goal‑Conflict Explanation Planning (GCEP).