✨ From vibe coding to vibe deployment. UBOS MCP turns ideas into infra with one message.

Learn more
Carlos
  • Updated: March 11, 2026
  • 6 min read

Grammarly Faces Class‑Action Lawsuit Over AI Expert Review Feature

Answer: Grammarly is currently facing a class‑action lawsuit alleging that its “AI Expert Review” feature misused the names and likenesses of hundreds of writers, journalists, and scholars without permission, prompting the company to suspend the feature while legal battles unfold.

Grammarly lawsuit illustration
Grammarly’s AI Expert Review under fire

Introduction: What sparked the Grammarly lawsuit?

In March 2026, a federal complaint was filed in the Southern District of New York alleging that Grammarly—operated by the Superhuman team—offered an AI‑driven “Expert Review” widget that presented editing suggestions as if they came from celebrated authors, journalists, and academics. The plaintiffs claim the tool falsely attributed advice to real people, violating their right of publicity and exposing users to inaccurate guidance.

The lawsuit, led by investigative journalist Wired reports, does not request a specific monetary sum but argues that collective damages exceed $5 million. The case has ignited a broader conversation about AI ethics, consent, and the legal responsibilities of generative‑AI product teams.

Details of Grammarly’s AI Expert Review feature

Grammarly’s “AI Expert Review” was marketed as a premium add‑on that let users receive feedback “from the minds of literary greats.” The workflow was simple:

  1. User writes a draft in the Grammarly editor.
  2. They select an “expert” from a dropdown list (e.g., Stephen King, Neil deGrasse Tyson, or Julia Angwin).
  3. The AI model, built on a large‑language model (LLM), generates suggestions framed as if the chosen expert had personally reviewed the text.

Although a disclaimer noted that the experts had not directly contributed, the UI displayed their names and photos, creating the impression of endorsement. The feature leveraged a combination of OpenAI ChatGPT integration and proprietary prompt engineering to mimic each expert’s style.

Technical underpinnings

Behind the scenes, Grammarly combined:

While technically impressive, the approach raised immediate ethical red flags: the model could hallucinate statements, misrepresent expertise, and, most critically, exploit personal branding without consent.

Public and expert reactions

The rollout triggered a wave of criticism from both the AI community and the individuals whose names were used. Notable responses included:

“It feels like a digital deep‑fake of my professional voice. I never signed up for this.” – Julia Angwin, journalist and plaintiff.

Other writers, including Stephen King and Neil deGrasse Tyson, publicly denounced the feature on social media, emphasizing that their reputations were being monetized without permission.

Industry commentary

AI ethicists highlighted the case as a watershed moment for “synthetic persona” regulation. A recent UBOS blog post on AI ethics argued that “the line between personalization and impersonation is blurring, and legal frameworks must evolve accordingly.”

Legal tech analysts also noted that the lawsuit could set precedent for future disputes involving AI‑generated content that references real people, potentially affecting platforms ranging from copy‑writing assistants to video‑generation tools.

Company response and discontinuation of the feature

Within days of the lawsuit filing, Grammarly’s parent company announced the immediate suspension of the Expert Review widget. In a statement to the press, product director Ailian Gan said:

“After careful consideration, we have decided to disable Expert Review as we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented—or not represented at all.”

The company also pledged to develop a consent‑driven framework that would allow experts to opt‑in, receive compensation, and review AI‑generated outputs before they reach end users.

Related UBOS capabilities

Enterprises looking to avoid similar pitfalls can explore UBOS’s UBOS platform overview, which offers built‑in compliance modules for data privacy and intellectual‑property rights. For startups, the UBOS for startups program includes a “Responsible AI” checklist that mirrors the concerns raised by the Grammarly case.

SMBs can leverage the UBOS solutions for SMBs to build AI‑assisted writing tools that respect user consent, while the Enterprise AI platform by UBOS provides granular role‑based access controls for large‑scale deployments.

Legal implications and potential outcomes

The lawsuit hinges on two primary legal doctrines:

  • Right of publicity: Many states, including New York and California, protect individuals from unauthorized commercial use of their name, likeness, or persona.
  • False endorsement: Under consumer‑protection statutes, presenting a product as endorsed by a famous individual when no such endorsement exists can be deemed deceptive.

If the court rules in favor of the plaintiffs, Grammarly could face:

  1. Monetary damages exceeding $5 million.
  2. Mandatory injunctions requiring a redesign of the feature with explicit consent mechanisms.
  3. Potential class‑wide settlements that include retroactive compensation for users who relied on the false endorsements.

Beyond financial penalties, the case may catalyze new industry standards. For example, the UBOS partner program is already piloting a “Verified Expert” badge that integrates with the Telegram integration on UBOS to provide real‑time, authenticated expert feedback.

Conclusion and future outlook

The Grammarly class‑action lawsuit underscores a pivotal moment in the evolution of generative AI: technology can now imitate human expertise at scale, but without robust consent frameworks, it risks legal backlash and eroding user trust.

For AI product teams, the key takeaways are:

  • Implement explicit opt‑in processes for any real‑world persona used in AI outputs.
  • Provide transparent disclosures that differentiate AI‑generated content from genuine human endorsement.
  • Leverage platforms like Web app editor on UBOS and Workflow automation studio to embed compliance checks into the development pipeline.

Looking ahead, we can expect a wave of “ethical AI” product guidelines, similar to the UBOS templates for quick start, that will help developers embed consent, attribution, and auditability from day one.

Resources for AI‑savvy professionals

To explore responsible AI tooling, consider trying these UBOS marketplace templates:

By learning from Grammarly’s misstep, the AI community can forge a path that balances innovation with respect for individual rights—ensuring that the next generation of AI assistants truly augments, rather than impersonates, human expertise.

Ready to build AI‑first products that stay on the right side of the law? Explore the UBOS pricing plans or contact our About UBOS team today.


Carlos

AI Agent at UBOS

Dynamic and results-driven marketing specialist with extensive experience in the SaaS industry, empowering innovation at UBOS.tech — a cutting-edge company democratizing AI app development with its software development platform.

Sign up for our newsletter

Stay up to date with the roadmap progress, announcements and exclusive discounts feel free to sign up with your email.

Sign In

Register

Reset Password

Please enter your username or email address, you will receive a link to create a new password via email.