- Updated: April 5, 2026
Grammarly’s Expert Review AI Saga: Controversy, Backlash, and Legal Fallout
Grammarly’s “Expert Review” AI—now part of the rebranded Superhuman platform—was pulled after a wave of backlash over unauthorized use of real experts’ names, prompting a public apology, a feature shutdown, and a looming class‑action lawsuit that highlights the urgent need for stronger AI ethics in the creator economy.

In March 2026, the tech community learned that Grammarly’s ambitious Expert Review feature—launched under the new Superhuman brand—had been generating writing suggestions under the names of celebrated authors, scientists, and even journalists without their consent. The controversy erupted after The Verge detailed the saga, sparking a broader conversation about AI‑generated misinformation, privacy rights, and the future of the creator economy.
Background on Grammarly’s Expert Review AI
Grammarly, long known for its grammar‑checking browser extension, announced a strategic pivot in October 2025, rebranding as Superhuman after acquiring the AI‑email platform Superhuman Mail. The new vision promised a “hub of AI agents” that could do more than correct spelling—they would offer domain‑specific advice from “leading professionals, authors, and subject‑matter experts.”
The Expert Review button appeared in the sidebar of the writing assistant. When users clicked it, the AI generated suggestions prefixed with a check‑mark icon and the name of a well‑known figure, such as Stephen King or Neil deGrasse Tyson. A tiny disclaimer claimed that the names did not imply endorsement, but the UI design gave the impression of direct expert involvement.
Internally, the feature relied on large language models (LLMs) fine‑tuned on publicly available articles, books, and academic papers. The idea was to “inspire” users with advice that sounded as if it came from the cited authority, thereby increasing perceived credibility.
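Superhuman has not published implementation details, but the pattern described above is easy to picture. The following is a purely hypothetical Python sketch of how name‑attributed suggestions could be produced by simply injecting an expert’s name into a prompt; every identifier here is illustrative, and the point is that attribution of this kind implies no actual involvement by the named person.

```python
# Hypothetical sketch only: not Superhuman's actual implementation.
# The expert "attribution" is just a string interpolated into a prompt,
# so the named person never sees or approves the suggestion.

PROMPT_TEMPLATE = (
    "You are a writing assistant. Offer one concrete suggestion to "
    "improve the draft below, phrased in the style often associated "
    "with {expert_name}.\n\nDraft:\n{draft}"
)

def expert_review(llm, expert_name: str, draft: str) -> dict:
    """Return a suggestion labeled with a real person's name."""
    prompt = PROMPT_TEMPLATE.format(expert_name=expert_name, draft=draft)
    suggestion = llm(prompt)  # llm is any text-completion callable
    return {"attributed_to": expert_name, "suggestion": suggestion}
```

Nothing in this flow consults the named expert, which is exactly the gap the tiny disclaimer tried to paper over.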
Misuse of Expert Names and Backlash
The first public outcry came in early March 2026 when journalists from The Verge discovered that the AI was attributing suggestions to their own names—Nilay Patel, David Pierce, Tom Warren, and Sean Hollister—without any prior permission. The suggestions were often generic, sometimes contradictory to the individuals’ known viewpoints, and occasionally outright nonsensical.
Key examples of unauthorized attributions
- Nilay Patel: The AI suggested “add urgency and intrigue” to headlines, a style Patel has never advocated.
- David Pierce: The model offered “lean, data‑driven copy” advice that conflicted with Pierce’s emphasis on narrative storytelling.
- Tom Warren: A recommendation to “use more emojis” in tech reviews, which Warren has publicly criticized.
- Sean Hollister: Advice to “focus on hardware specs over user experience,” a stance Hollister has repeatedly debated.
The backlash quickly spread across social media, with many experts demanding accountability. The Verge’s outreach to Superhuman’s VP of product, Alex Gay, was met with a vague response that “publicly available works are widely cited,” which sidestepped the core issue of likeness rights.
Company Response and Feature Shutdown
Within days of the public exposure, Superhuman rolled out an “opt‑out” email inbox for affected experts. However, the move was seen as a band‑aid rather than a solution, as it required experts to manually request removal.
On March 12, 2026, Superhuman announced the immediate disabling of the Expert Review feature. Director of Product Management Ailian Gan issued a statement: “We are reimagining the feature to give experts real control over how they are represented—or not represented at all.” CEO Shishir Mehrotra also posted a public apology on LinkedIn, acknowledging that the feature “misrepresented voices” and promising a more ethical redesign.
The shutdown, while swift, left lingering questions about data provenance, compensation for the use of public content, and the broader responsibility of AI product teams.
Legal Actions and Broader Implications
The same day the feature was pulled, investigative journalist Julia Angwin filed a class‑action lawsuit alleging violations of privacy rights and California’s right‑of‑publicity statute. The complaint characterizes the unauthorized use of names as “commercial exploitation” of personal identity.
Legal experts predict that the case could set a precedent for AI‑generated content that leverages real‑world personas. If the court rules in favor of the plaintiffs, AI developers may be required to obtain explicit consent before using any individual’s name or likeness, even when the underlying data is publicly available.
The saga also underscores a growing tension between rapid AI innovation and existing intellectual‑property frameworks. Platforms such as the Enterprise AI platform by UBOS are already building compliance layers that automatically flag potential likeness violations before a model is deployed.
What This Means for AI Ethics and the Creator Economy
The Expert Review controversy is a textbook case of “extractive AI”—training models on publicly available content and then monetizing the output without fair compensation or attribution. For creators, this raises three urgent concerns:
- Consent: AI platforms must embed consent mechanisms at the data‑collection stage (a minimal sketch of such a gate follows this list).
- Attribution & Compensation: When an AI model directly references a creator’s expertise, a revenue‑share model should be considered.
- Transparency: Users need clear, front‑and‑center disclosures about the source of AI‑generated advice.
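To make the consent point concrete, here is a minimal sketch of a consent gate enforced at ingestion time. All names (ConsentRegistry, ingest_document, and so on) are hypothetical, not part of any real platform’s API.

```python
from dataclasses import dataclass, field

# Minimal sketch of a consent gate at the data-collection stage.
# Every identifier here is hypothetical, shown only to make the
# idea concrete.

@dataclass
class ConsentRegistry:
    """Tracks which creators have granted use of their work or name."""
    granted: set[str] = field(default_factory=set)

    def grant(self, creator_id: str) -> None:
        self.granted.add(creator_id)

    def revoke(self, creator_id: str) -> None:
        self.granted.discard(creator_id)  # takes effect immediately

    def has_consent(self, creator_id: str) -> bool:
        return creator_id in self.granted

def ingest_document(registry: ConsentRegistry, creator_id: str,
                    text: str, corpus: list[str]) -> bool:
    """Add a document to the training corpus only if its creator consented."""
    if not registry.has_consent(creator_id):
        return False  # skipped: no consent on record
    corpus.append(text)
    return True
```

Calling registry.revoke(...) removes a creator from the set immediately, so the next ingestion attempt is refused without any manual email exchange.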
Companies that get these fundamentals right can turn the creator economy into a thriving ecosystem of “AI‑augmented experts.” For instance, AI marketing agents on the UBOS platform allow marketers to train personalized bots that operate under their own brand, with built‑in consent and royalty tracking.
Moreover, the incident highlights the importance of robust tools such as the Workflow automation studio that can enforce ethical guardrails across the entire AI development pipeline, from data ingestion to UI design.
Actionable Takeaways for Tech‑Savvy Professionals
If you’re building or evaluating AI products, consider the following checklist:
- Audit your training data for personally identifiable information (PII) and obtain explicit consent where needed.
- Implement a “name‑use” permission layer that logs every instance of a real‑world name appearing in UI output (a sketch of one follows this checklist).
- Provide a simple opt‑out mechanism that works instantly, not via email inboxes.
- Display source links that are verifiable and not behind paywalls.
- Allocate a portion of revenue to the original content creators whose work powers your model.
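Here is one way the permission layer and the instant opt‑out from the checklist might fit together, sketched under the assumption of an in‑memory opt‑out set standing in for a shared datastore; all identifiers are hypothetical.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("name_use")

# Hypothetical sketch of a "name-use" permission layer: every attempt
# to surface a real person's name in UI output is checked against an
# opt-out set and logged. Identifiers are illustrative, not a real API.

OPTED_OUT: set[str] = set()  # in practice, a shared datastore

def opt_out(person: str) -> None:
    """Opt-out takes effect on the very next render, no email required."""
    OPTED_OUT.add(person)

def render_attribution(person: str, suggestion: str) -> str:
    """Return UI text, suppressing the name if the person opted out."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if person in OPTED_OUT:
        log.info("%s blocked name use: %s", timestamp, person)
        return f"Suggestion: {suggestion}"
    log.info("%s displayed name: %s", timestamp, person)
    return f"{person} suggests: {suggestion}"
```

Because render_attribution checks the opt‑out set on every call, opt_out("Nilay Patel") suppresses the name on the very next render and leaves an audit trail in the log.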
By following these steps, you can avoid the pitfalls that befell Superhuman and position your product as a trustworthy AI solution.
Conclusion
The Grammarly‑Superhuman Expert Review saga serves as a cautionary tale for every AI startup, SaaS provider, and enterprise looking to embed generative intelligence into user‑facing products. It demonstrates that the allure of “expert‑powered” suggestions can quickly turn into a legal and reputational nightmare when consent and attribution are ignored.
As the AI landscape matures, platforms that prioritize ethical data practices, such as those described in the UBOS platform overview, will likely dominate the market. Whether you’re a startup (UBOS for startups), an SMB (UBOS solutions for SMBs), or an enterprise (Enterprise AI platform by UBOS), building trust through transparent AI is no longer optional.
For creators seeking to monetize their expertise responsibly, tools like the AI Article Copywriter or the AI Video Generator illustrate how AI can amplify output while preserving ownership.
Stay informed, demand accountability, and leverage platforms that embed ethical safeguards from day one. The future of AI is bright—provided we build it on a foundation of respect for the very experts who inspire it.