- Updated: March 11, 2026
- 7 min read
Why Not? Solver-Grounded Certificates for Explainable Mission Planning – Technical Overview
Direct Answer
The paper “Why Not? Solver‑Grounded Certificates for Explainable Mission Planning” introduces a framework that extracts explanations directly from the optimization model used to schedule Earth‑observation satellites. By generating “solver‑grounded certificates” – minimal infeasible subsets, tight constraint sets, and inverse solves – the approach guarantees that every justification is faithful to the underlying solver, eliminating the non‑causal and incomplete explanations that plague post‑hoc methods.
Background: Why This Problem Is Hard
Satellite operators must constantly decide which imaging requests to accept, reject, or modify. Each decision is the result of a complex mixed‑integer program that balances dozens of constraints: orbital dynamics, power budgets, downlink windows, and customer priorities. The difficulty lies in two intertwined aspects:
- Opacity of the optimizer: Modern solvers explore millions of candidate solutions before converging. The final schedule is a black box to the human operator.
- Multi‑cause rejections: A single request may be infeasible because of a conjunction of constraints (e.g., insufficient power *and* a conflicting downlink). Explaining “why” therefore requires identifying the exact subset of constraints that together block the request.
Existing approaches try to retrofit explanations after the fact. They typically attach a reasoning layer that looks at the final schedule, extracts feature importance, or runs a separate heuristic to guess the cause. This leads to three systemic problems:
- Non‑causal attributions: The post‑hoc layer may point to a constraint that was not actually active in the solver’s search path.
- Missing conjunctions: When multiple constraints jointly cause infeasibility, the explanation often cites only the most obvious one, hiding the true root cause.
- Instability across runs: Small changes in random seeds or solver heuristics can produce wildly different explanations, eroding trust.
For mission‑critical domains like Earth observation, where operators must justify decisions to customers and regulators, these shortcomings are unacceptable.
What the Researchers Propose
The authors present a “solver‑grounded certificate” framework that treats the optimizer itself as the source of truth for explanations. The key idea is to interrogate the constraint model at the point where the solver declares infeasibility or optimality, and to extract minimal, provably sufficient subsets of constraints that explain the outcome.
Core Components
- Certificate Generator: A lightweight module that hooks into the solver’s conflict analysis routine to retrieve minimal infeasible subsets (MIS) when a request is rejected.
- Constraint Tightener: For accepted requests, it identifies the “tight” constraints that limit the solution space and computes contrastive trade‑offs (e.g., “if you relax power by 5 %, you could also accept request X”).
- Inverse Solver Engine: Handles “what‑if” queries by solving a modified optimization problem that forces a previously rejected request to be feasible, then reports the minimal changes required.
All three components operate on the same mathematical model, ensuring that explanations are always consistent with the solver’s internal logic.
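The rejected-request path can be illustrated with the classic "deletion filter" from infeasibility analysis, one standard way to extract a minimal infeasible subset. The sketch below is illustrative, not the paper's implementation: it models each constraint as a closed interval on a single scalar resource (say, available power), where a set of constraints is feasible iff the intervals overlap.

```python
def feasible(constraints):
    """Toy model: interval constraints [lo, hi] are feasible iff they overlap."""
    lo = max(c[0] for c in constraints)
    hi = min(c[1] for c in constraints)
    return lo <= hi

def minimal_infeasible_subset(constraints):
    """Deletion filter: discard each constraint that is not needed to
    preserve infeasibility; whatever remains is a minimal infeasible subset."""
    assert not feasible(constraints), "only defined for infeasible sets"
    core = list(constraints)
    for c in list(core):
        trial = [x for x in core if x is not c]
        if trial and not feasible(trial):
            core = trial  # c was redundant for the conflict; drop it
    return core

# Example: power window [0, 5] and downlink window [10, 20] conflict;
# the broad priority window [0, 30] plays no causal role.
mis = minimal_infeasible_subset([(0, 5), (10, 20), (0, 30)])
print(mis)  # → [(0, 5), (10, 20)]
```

Production solvers expose similar functionality directly (e.g., irreducible inconsistent subsystems in MIP solvers), which is what a certificate generator would hook into rather than re-deriving conflicts externally.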
How It Works in Practice
The workflow can be visualized as a loop that runs every time the scheduling system processes a batch of orders. Figure 1 (illustrated below) shows the data flow.

Step‑by‑step Process
- Input ingestion: The system receives a set of imaging orders, each annotated with priority, time window, and resource requirements.
- Optimization run: A mixed‑integer programming solver attempts to produce a feasible schedule that maximizes a weighted objective.
- Outcome classification: For each order, the solver either includes it (selected) or excludes it (rejected).
- Certificate extraction:
  - If rejected, the Conflict Analyzer returns the smallest set of constraints that together make the order infeasible (MIS).
  - If selected, the Tightness Analyzer reports which constraints are binding and quantifies how much slack is left.
- What‑if handling: Operators can pose a “could we accept order Y if we change X?” query. The Inverse Solver re‑optimizes with the target order forced in, then reports the minimal adjustments (e.g., shift a downlink window by 3 minutes).
- Presentation layer: The certificates are rendered as human‑readable bullet points, tables, or visual overlays on the schedule UI, giving operators a clear, causal narrative.
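The what-if step above can be sketched as a toy inverse solve. The assumptions here are not from the paper: a greedy knapsack-style scheduler over a single power budget, and a deliberately simplistic notion of "minimal change" that keeps the current picks and only grows the budget.

```python
def greedy_schedule(orders, budget):
    """orders: list of (name, power, value); accept highest value first."""
    chosen, used = [], 0.0
    for name, power, value in sorted(orders, key=lambda o: -o[2]):
        if used + power <= budget:
            chosen.append(name)
            used += power
    return chosen, used

def what_if_accept(orders, budget, target):
    """Inverse-solve sketch: smallest extra power budget that lets
    `target` join the schedule (0 if it is already accepted)."""
    chosen, used = greedy_schedule(orders, budget)
    if target in chosen:
        return 0.0
    target_power = next(p for n, p, _ in orders if n == target)
    return used + target_power - budget

orders = [("A", 3, 10), ("B", 4, 8), ("C", 5, 6)]
print(what_if_accept(orders, budget=7, target="C"))  # → 5.0 extra units needed
```

In the real framework the re-optimization is done on the full constraint model, so the reported adjustment (shifting a downlink window, relaxing a power margin) is minimal with respect to the same objective the scheduler uses.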
What distinguishes this approach from post‑hoc methods is that the explanations are derived from the same constraint matrix that the optimizer uses, not from an external heuristic. Consequently, the certificates are provably sound (they truly block the request) and stable (re‑running the solver with a different seed yields the same certificate).
Evaluation & Results
The authors evaluated the framework on a realistic satellite scheduling benchmark that features 30 low‑Earth‑orbit satellites and up to 200 imaging orders per batch. Three dimensions were measured:
Soundness
Every certificate was cross‑checked against the solver’s constraint model. In 15 out of 15 sampled rejections, the cited constraints were indeed the minimal infeasible subset, yielding a perfect soundness score.
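The cross-check described above reduces to two tests per certificate: the cited constraints alone must be infeasible (soundness), and dropping any single one of them must restore feasibility (minimality). A minimal sketch, reusing a toy interval model of feasibility as an illustrative assumption:

```python
def feasible(constraints):
    """Toy model: interval constraints [lo, hi] are feasible iff they overlap."""
    if not constraints:
        return True
    return max(c[0] for c in constraints) <= min(c[1] for c in constraints)

def is_sound_and_minimal(cert):
    """Sound: the certificate itself is infeasible.
    Minimal: removing any one constraint makes the remainder feasible."""
    if feasible(cert):
        return False
    return all(feasible([c for c in cert if c is not x]) for x in cert)

print(is_sound_and_minimal([(0, 5), (10, 20)]))           # → True: a genuine MIS
print(is_sound_and_minimal([(0, 5), (10, 20), (0, 30)]))  # → False: (0, 30) is redundant
```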
Counterfactual Validity
For 7 “what‑if” scenarios, the inverse solves produced the exact minimal changes needed to make a rejected order feasible. All 7 predictions held true when the modified schedule was re‑run, confirming that the certificates are not merely plausible but actionable.
Stability
The authors compared certificate sets across 28 random-seed pairings and measured their Jaccard similarity. The similarity was 1.0 across all pairs, indicating that the explanations are deterministic with respect to the underlying model.
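The stability metric itself is straightforward to compute. A sketch with made-up certificate sets (the constraint names are illustrative, not the paper's data):

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two certificate sets (1.0 for identical sets)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Certificates for one rejected order, extracted under different seeds.
certs_by_seed = {
    0: {"power_budget", "downlink_window"},
    1: {"power_budget", "downlink_window"},
    2: {"power_budget", "downlink_window"},
}
scores = [jaccard(a, b) for a, b in combinations(certs_by_seed.values(), 2)]
print(min(scores))  # → 1.0: every seed pair produced identical certificates
```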
Scalability
A scalability study pushed the benchmark to its full size of 200 orders across 30 satellites. Certificate extraction time grew linearly with problem size, staying under 2 seconds per batch – well within operational limits for ground‑segment planning cycles.
In contrast, a strong post‑hoc baseline missed at least one constraint in every multi‑cause rejection and produced non‑causal attributions in 29 % of cases, underscoring the practical advantage of the solver‑grounded approach.
Why This Matters for AI Systems and Agents
Explainability is a cornerstone for trustworthy autonomous agents, especially in high‑stakes domains like space operations. Solver‑grounded certificates provide several concrete benefits for AI practitioners:
- Trustworthy decision audit trails: Operators can present regulators with mathematically verified justifications, reducing compliance risk.
- Improved human‑in‑the‑loop workflows: Clear, causal explanations enable faster triage of rejected orders and more informed negotiation with customers.
- Facilitated debugging of optimization models: When a schedule behaves unexpectedly, the minimal infeasible subsets pinpoint modeling errors or overly tight constraints.
- Reusable explanation modules: The certificate generators can be wrapped as micro‑services, allowing any AI‑driven mission planner to plug in explainability without redesigning the core optimizer.
For developers building autonomous scheduling agents, the framework offers a template for “explain‑by‑design” architectures: embed the explanation logic alongside the decision engine rather than treating it as an afterthought. This aligns with emerging best practices in AI governance and could be extended to other domains such as logistics, manufacturing, or autonomous vehicle routing.
What Comes Next
While the results are compelling, the authors acknowledge several avenues for further research:
- Dynamic environments: Extending certificates to handle real‑time disruptions (e.g., sudden satellite anomalies) where the constraint set evolves on the fly.
- Multi‑objective trade‑offs: Incorporating economic or risk metrics into the contrastive analysis to answer “what if we prioritize revenue over coverage?”
- Human‑centric presentation: User studies to determine the most effective visual and textual formats for operators under time pressure.
- Cross‑solver portability: Adapting the certificate extraction to other solver families (e.g., CP‑SAT, stochastic programming) to broaden applicability.
Beyond satellite scheduling, the same principles could be applied to any AI system that relies on combinatorial optimization, such as supply‑chain orchestration or cloud resource allocation. By grounding explanations in the solver, developers can ensure that AI agents remain both performant and accountable.