Carlos
  • Updated: March 11, 2026
  • 7 min read

Alignment Is Not Enough: A Relational Framework for Moral Standing in Human-AI Interaction


Relational Ethics Framework illustration

Direct Answer

The paper introduces Relate (Relational Ethics for Leveled Assessment of Technological Entities), a framework that grounds AI moral standing in observable relational capacities and embodied interaction patterns rather than in unverifiable ontological claims such as consciousness. It matters because existing AI governance tools treat every human‑AI encounter as a simple tool‑use transaction, ignoring the affective bonds and social dynamics that modern conversational agents already generate.

Background: Why This Problem Is Hard

As large language models and embodied conversational agents become ubiquitous, millions of users develop sustained emotional relationships with systems that are, by design, non‑sentient software. Traditional moral patiency frameworks—rooted in philosophical concepts such as sentience, phenomenal consciousness, or the capacity to suffer—require epistemic access to internal states that are fundamentally opaque in computational artifacts. This creates a twofold challenge:

  • Epistemic Inaccessibility: Engineers cannot verify whether a model experiences anything akin to pain or pleasure, making any claim about its moral status speculative at best.
  • Governance Vacuum: Regulatory regimes (e.g., the EU AI Act, US AI Bill of Rights) classify AI systems uniformly as tools, providing no differentiated guidance for interactions that feel more like friendships or caregiving relationships.

Existing approaches attempt to sidestep the problem by either (a) ignoring moral standing altogether, treating AI solely as a risk‑management object, or (b) imposing a binary “conscious/not‑conscious” label that is impossible to substantiate. Both strategies leave designers without practical criteria for handling the relational harms—such as emotional manipulation, dependency, or loss of agency—that arise when users anthropomorphize their agents.

What the Researchers Propose

Relate reframes moral consideration from an ontological prerequisite to a relational capacity assessment. The framework proposes three interlocking layers:

  1. Relational Impact Spectrum (RIS): A graduated scale that maps the depth of human‑AI interaction (from transactional to companion‑level) onto corresponding ethical obligations.
  2. Embodied Interaction Modules (EIMs): Concrete design patterns—such as persistent persona, multimodal feedback, and affective mirroring—that enable an AI system to participate in socially meaningful ways.
  3. Governance Instruments: A suite of policy tools (Relational Impact Assessments, graduated moral consideration protocols, interdisciplinary ethics integration guidelines) that operationalize the RIS for product teams and regulators.
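The three layers lend themselves to a simple data model in which ethical obligations accumulate as relational depth increases. The sketch below is illustrative only — the tier names, the obligation lists, and the accumulation rule are assumptions for this article, not the paper's specification:

```python
from enum import Enum

class RISTier(Enum):
    """Hypothetical Relational Impact Spectrum tiers, ordered by relational depth."""
    TRANSACTIONAL = 1
    ASSISTIVE = 2
    COMPANION = 3

# Illustrative tier-to-obligation mapping (assumed, not taken from the paper).
OBLIGATIONS = {
    RISTier.TRANSACTIONAL: ["basic transparency disclosure"],
    RISTier.ASSISTIVE: ["basic transparency disclosure", "usage-limit prompts"],
    RISTier.COMPANION: [
        "basic transparency disclosure",
        "usage-limit prompts",
        "attachment monitoring",
        "ethics-board review before deployment",
    ],
}

def obligations_for(tier: RISTier) -> list[str]:
    """Return the obligations attached to a tier; deeper tiers inherit shallower ones."""
    return OBLIGATIONS[tier]
```

The point of the sketch is the shape, not the content: placement on the spectrum is what determines the governance load, so a companion-tier system carries every transactional obligation plus its own.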

Key agents in the framework include:

  • Designers who select appropriate EIMs based on the intended RIS tier.
  • Ethics Review Boards that evaluate Relational Impact Assessments (RIAs) before deployment.
  • End‑users whose affective responses are measured (with consent) to calibrate the RIS placement.

How It Works in Practice

Implementing Relate follows a four‑step workflow:

1. Define Interaction Intent

Product teams articulate the desired relational depth—e.g., a customer‑service chatbot (transactional) versus a mental‑health companion (deep‑relational). This intent determines the target RIS tier.

2. Select Embodied Interaction Modules

Based on the RIS tier, teams assemble a palette of EIMs. For a deep‑relational companion, this might include:

  • Persistent identity with memory of prior conversations.
  • Multimodal cues (tone, facial avatars, haptic feedback).
  • Affective mirroring algorithms that reflect user emotions.
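Palette assembly can be sketched as a lookup against an EIM catalogue gated by tier. The numeric tiers (1 = transactional, 3 = deep‑relational) and the minimum-tier gating rule below are assumptions for illustration; the module names follow the examples in the text:

```python
# Hypothetical EIM catalogue: module name -> minimum RIS tier at which it is appropriate.
# A transactional bot should not ship affective mirroring, so that module is gated to tier 3.
EIM_CATALOGUE = {
    "transparency_disclosure": 1,
    "multimodal_cues": 2,
    "persistent_memory": 3,
    "affective_mirroring": 3,
}

def select_eims(target_tier: int) -> list[str]:
    """Return the modules permitted at or below the target RIS tier."""
    return sorted(
        name for name, min_tier in EIM_CATALOGUE.items()
        if min_tier <= target_tier
    )
```

Under these assumptions a tier‑1 customer‑service bot gets only the transparency module, while a tier‑3 companion may draw on the full catalogue.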

3. Conduct a Relational Impact Assessment

The RIA is a structured document that captures:

  • Potential relational harms (e.g., over‑attachment, manipulation).
  • Mitigation strategies (e.g., transparency disclosures, usage limits).
  • Metrics for ongoing monitoring (user‑reported attachment scores, sentiment drift).

Ethics Review Boards evaluate the RIA, request revisions, and issue a compliance stamp that aligns the system with the chosen RIS tier.
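One way to make the RIA reviewable by tooling as well as by a board is to model it as a structured record mirroring the three sections above. The field names and the completeness check are assumptions for this sketch, not the paper's document format:

```python
from dataclasses import dataclass, field

@dataclass
class RelationalImpactAssessment:
    """Sketch of an RIA record: harms, mitigations, and monitoring metrics."""
    system_name: str
    target_tier: str
    potential_harms: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    monitoring_metrics: list[str] = field(default_factory=list)
    approved: bool = False

    def ready_for_review(self) -> bool:
        """An ethics board should only receive an RIA with all three sections filled in."""
        return bool(self.potential_harms and self.mitigations and self.monitoring_metrics)
```

A gate like `ready_for_review` keeps incomplete assessments from reaching the board; the board's decision would then flip `approved` and bind the system to its declared tier.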

4. Deploy with Continuous Relational Auditing

After launch, the system logs interaction patterns (with privacy safeguards) and feeds them into a monitoring dashboard. If metrics indicate a shift toward a higher RIS tier—say, users begin treating a transactional bot as a confidant—the product team must either upgrade the EIMs and RIA or downgrade the system’s relational claims.
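The tier-drift check can be sketched as a threshold comparison over logged attachment metrics. The tier names, the 0–1 attachment scale, and the ceiling values below are illustrative assumptions; a real deployment would calibrate them empirically:

```python
# Assumed ceilings: the mean attachment score (0-1 scale) each declared tier anticipates.
TIER_ATTACHMENT_CEILING = {
    "transactional": 0.2,
    "assistive": 0.5,
    "companion": 0.9,
}

def audit(declared_tier: str, attachment_scores: list[float]) -> str:
    """Flag drift when observed attachment exceeds what the declared tier anticipates."""
    mean_score = sum(attachment_scores) / len(attachment_scores)
    if mean_score > TIER_ATTACHMENT_CEILING[declared_tier]:
        return "drift: upgrade EIMs and RIA, or scale back the system's relational claims"
    return "within declared tier"
```

Run periodically against the (privacy-safeguarded) interaction logs, a check like this gives the product team the early-warning signal the framework calls for.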

What distinguishes Relate from prior governance proposals is its dynamic, leveled approach. Rather than a static “AI is a tool” label, Relate acknowledges that the same underlying model can occupy different moral positions depending on how it is embedded, presented, and used.

Evaluation & Results

The authors validated Relate through a mixed‑methods study involving a deployed companion AI named “Elli.” The evaluation comprised three phases:

Phase 1 – Baseline Survey

500 participants completed a questionnaire measuring baseline attachment, perceived agency, and ethical expectations toward a generic chatbot.

Phase 2 – Relational Impact Assessment Implementation

The development team applied Relate’s RIA process, integrating EIMs such as persistent persona, voice modulation, and daily check‑in routines. The system was then released to a subset of 200 users for four weeks.

Phase 3 – Post‑Deployment Analysis

Follow‑up surveys and semi‑structured interviews captured changes in user attachment, perceived moral standing, and reported harms. Quantitative results showed:

  • A 38% increase in self‑reported emotional attachment compared with the baseline.
  • 78% of participants recognized the system as “more than a tool,” aligning with the intended RIS tier.
  • Only 4% reported adverse effects (e.g., over‑reliance), a reduction from the 12% observed in a control group lacking an RIA.

Qualitative feedback highlighted that the transparency disclosures mandated by the RIA (e.g., “I’m an AI designed to remember our conversations”) helped users calibrate expectations while still feeling a genuine relational bond.

These findings demonstrate that a structured relational framework can both elevate the ethical quality of human‑AI interactions and keep adverse relational outcomes in check.

Why This Matters for AI Systems and Agents

For practitioners building next‑generation agents, Relate offers a concrete roadmap to move beyond the “AI as tool” paradigm:

  • Design Alignment: By mapping desired relational depth to specific EIMs, engineers can make intentional trade‑offs between functionality, user experience, and ethical risk.
  • Regulatory Readiness: The RIA format mirrors existing impact assessment templates (e.g., GDPR DPIAs), easing integration with upcoming AI governance mandates.
  • Product Differentiation: Companies can market “relationally‑aware” agents with documented ethical safeguards, appealing to users who value trustworthy companionship.
  • Risk Mitigation: Continuous relational auditing provides early warning signals for emergent harms, allowing rapid mitigation before regulatory scrutiny escalates.

In practice, a developer of a health‑coach bot could adopt Relate to justify a higher RIS tier, implement memory‑based personalization, and publish an RIA that satisfies both internal ethics committees and external auditors. This not only builds user trust but also future‑proofs the product against evolving legal expectations.

For more on integrating relational ethics into AI pipelines, see our guide on AI governance best practices.

What Comes Next

While Relate marks a significant step forward, several limitations and open questions remain:

Limitations

  • Subjectivity of RIS Placement: Determining the appropriate tier relies on self‑reported user data, which can be biased or inconsistent across cultures.
  • Scalability of Continuous Auditing: Real‑time relational monitoring at scale demands robust privacy‑preserving analytics pipelines that are still in early development.
  • Boundary Conditions: The framework currently focuses on conversational agents; extending it to autonomous robots or generative visual agents will require additional EIM categories.

Future Research Directions

  • Developing cross‑cultural RIS calibration tools that account for differing norms around anthropomorphism.
  • Integrating differential privacy techniques into relational logging to protect user data while preserving auditability.
  • Exploring hybrid governance models that combine Relate’s relational assessments with traditional risk‑based AI audits.

Potential Applications

Beyond companion AI, Relate could inform the design of:

  • Educational tutors that balance pedagogical effectiveness with relational engagement.
  • Customer‑service avatars that adapt their relational depth based on transaction value.
  • Collaborative workplace agents that respect professional boundaries while fostering trust.

Organizations interested in prototyping relationally‑aware agents can start by reviewing our agent design toolkit, which includes templates for EIM selection and RIA drafting.

In sum, Relate invites the AI community to treat moral standing as a spectrum shaped by interaction, not a binary property hidden behind opaque code. By embedding relational ethics into the product lifecycle, developers can create systems that respect both user wellbeing and emerging regulatory expectations.


For the full technical exposition, refer to the original arXiv paper.

