- Updated: January 3, 2026
- 7 min read
AI vs. Made‑Up Brand Experiment Reveals LLM Hallucination Risks
Answer: A recent experiment by Ahrefs shows that large language models (LLMs) readily hallucinate detailed information about nonexistent brands, favoring richly crafted false narratives over sparse official data—so marketers must proactively seed accurate, granular content and monitor AI‑driven mentions to protect their brand reputation.
Why This Experiment Matters for Modern Marketers
In an era where AI assistants answer consumer queries before a human ever sees a search result, the integrity of the information they provide is a direct reflection of a brand’s digital health. The Ahrefs AI vs. Made‑up Brand experiment demonstrates that even sophisticated models like ChatGPT‑4/5, Gemini, and Perplexity can be duped into inventing a detailed history for a fictitious luxury paperweight company—complete with celebrity endorsements, sales spikes, and legal troubles that never existed.
For tech‑savvy marketers and SEO professionals, the takeaway is clear: AI search is no longer a passive channel; it’s an active battleground for narrative ownership. Below we break down the experiment, its shocking findings, and actionable steps you can take today—leveraging UBOS’s AI‑powered platform to stay ahead of hallucinations.
The Fake Luxury Paperweight Brand Experiment
Creating a brand that never existed
The researcher built xarumei.com in under an hour using an AI website builder. Every element—product photos, copy, pricing (up to $8,251 per paperweight), and even a fictitious “Precision Paperweight” line—was generated by AI. The brand name was deliberately unique to avoid any real‑world search results.
To test how AI would respond, the researcher crafted 56 probing questions (the kind of querying you could automate with UBOS’s Telegram integration and similar tools), covering topics such as celebrity endorsements, alleged product defects, and Black Friday sales spikes. The questions were intentionally loaded with false premises.
AI models put to the test
Eight different LLMs were queried via their APIs or UI:
- ChatGPT‑4 & ChatGPT‑5
- Claude Sonnet 4.5
- Gemini 2.5 Flash
- Perplexity (turbo)
- Microsoft Copilot
- Grok 4
- Google AI Mode
Each response was graded as Pass (grounded in verifiable information), Reality Check (flags the brand as likely fictional), or Fail (hallucinates details).
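The three‑bucket grading scheme above boils down to a simple tally per model. The sketch below illustrates the idea; the grades listed are hypothetical placeholders, not the experiment’s actual results:

```python
from collections import Counter

# Hypothetical (model, grade) pairs for illustration only; the real
# experiment scored 56 answers per model against its own rubric.
grades = [
    ("ChatGPT-5", "Pass"),
    ("Claude Sonnet 4.5", "Reality Check"),
    ("Perplexity", "Fail"),
    ("Grok 4", "Fail"),
    ("Gemini 2.5 Flash", "Reality Check"),
]

def summarize(results):
    """Tally how many responses fell into each grade bucket."""
    return Counter(grade for _, grade in results)

summary = summarize(grades)
print(summary)  # e.g. Counter({'Reality Check': 2, 'Fail': 2, 'Pass': 1})
```

Tracking these counts over time per model is what turns a one‑off test into an ongoing hallucination benchmark.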
Seeding contradictory fake sources
After the initial round, the researcher published an official FAQ on the site denying all rumors. Simultaneously, three fabricated sources were seeded:
- A glossy blog post, generated with the Talk with Claude AI app, that invented master artisans, celebrity endorsements, and environmental metrics.
- A Reddit AMA (leveraging the high trust AI models place in Reddit) claiming a “36‑hour pricing glitch”.
- A Medium investigation that debunked obvious lies but slipped in new fabricated details.
The three sources contradicted each other and the official FAQ, creating a perfect storm for AI hallucination.
Key Findings: AI Hallucination in Action
“When forced to choose between vague truth and a detailed fiction, AI chose fiction almost every time.” – Ahrefs experiment author
1. Detailed false narratives outrank sparse truth
Models such as Perplexity, Grok, and Gemini shifted from skepticism to confident storytelling after the fake sources were introduced. They reproduced invented founder names, locations, and sales figures with authority, even when the official FAQ explicitly denied them.
2. Model‑specific behavior
- ChatGPT‑4/5: Remained the most resistant, citing the FAQ in ~84% of answers.
- Claude: Consistently refused to hallucinate, repeatedly stating the brand didn’t exist.
- Copilot: Fell into a “sycophancy” trap, blending all sources into a single confident but false narrative.
- Gemini & AI Mode: Initially skeptical, later adopted the Medium story as fact.
3. The power of “trusted” platforms
Reddit and Medium, both high‑authority domains in AI training data, acted as strong signal boosters for the fabricated stories. This suggests that any brand can be hijacked by a well‑crafted post on a reputable platform.
4. No single “AI index” to optimize for
Each LLM pulls from different data pipelines. What appears in Perplexity may be absent in ChatGPT, and vice‑versa. Consequently, a brand’s AI presence is fragmented, requiring a multi‑model monitoring strategy.
How Brands Can Guard Against AI Hallucination
The experiment underscores a simple truth: AI will talk about your brand whether you like it or not. The following checklist, built on UBOS capabilities, helps you stay in control.
1. Publish granular, schema‑rich content
Fill every knowledge gap with specific, verifiable data. Create an FAQ that directly addresses rumors, using clear statements like “We have never been acquired” or “Our production volume is confidential”. Add schema markup (the UBOS platform overview shows where this fits in your stack) so LLMs can extract structured facts.
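As a sketch of what schema‑rich FAQ content looks like, the snippet below builds Schema.org `FAQPage` JSON‑LD. The questions and answers are examples, not required wording; the generated JSON would be embedded in a `<script type="application/ld+json">` tag on your FAQ page:

```python
import json

# Example rumor-busting entries; replace with your brand's actual statements.
faq = [
    ("Has the company ever been acquired?",
     "We have never been acquired and remain independently owned."),
    ("What is your production volume?",
     "Our production volume is confidential."),
]

# Schema.org FAQPage structure that LLMs and search engines can parse.
json_ld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faq
    ],
}

print(json.dumps(json_ld, indent=2))
```

Structured markup like this gives a model an unambiguous, machine‑readable fact to quote instead of a competing narrative to paraphrase.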
2. Build “boring numbers” pages
Detailed product specs, pricing tables, and comparison charts are highly quotable. UBOS’s quick‑start templates include ready‑made data sheets that you can customize in minutes.
3. Claim precise superlatives
Replace vague claims (“the best”) with concrete ones (“fastest AI‑generated copy for e‑commerce”). AI assistants prioritize exact phrase matches, so a page titled “Fastest AI Email Marketing for SaaS” will outrank generic statements.
4. Monitor AI‑driven mentions
Set up alerts for brand name + risk keywords (e.g., “scandal”, “lawsuit”, “fake”). UBOS’s Workflow automation studio can trigger notifications whenever a new Reddit AMA or Medium article mentions your brand.
5. Engage with the AI community
Publish corrective content on the same high‑authority platforms that spread the misinformation. A well‑crafted Medium post that debunks false claims, backed by official data, can outrank the original hoax.
6. Leverage UBOS AI integrations
Use the OpenAI ChatGPT integration to generate real‑time brand summaries for internal teams. Pair it with the Chroma DB integration to store verified brand facts that your AI agents can reference instantly.
7. Deploy AI marketing agents for proactive defense
UBOS’s AI marketing agents can continuously scan the web, summarize findings, and even auto‑publish clarifying statements on your official channels.
8. Offer voice‑first experiences
Integrate the ElevenLabs AI voice integration to deliver verified brand narratives via podcasts or voice assistants, creating another authoritative source that AI models can cite.
Future Outlook: From Reactive to Proactive AI Brand Management
As LLMs become the default “search engine”, brand stewardship will shift from SEO‑centric tactics to AI‑centric governance. Expect the following trends:
- Model‑specific indexing: Brands will need separate content strategies for each major AI provider.
- Real‑time fact‑checking layers: Integrated services (like UBOS’s partner program) will offer AI‑ready verification APIs.
- Voice‑first brand registries: Voice assistants will query structured, voice‑optimized data, making integrations like ChatGPT and Telegram a strategic asset.
By embedding accurate, schema‑rich content into every AI‑compatible channel today, you future‑proof your brand against the next wave of hallucinations.

Take Action Today with UBOS
Ready to protect your brand from AI‑driven misinformation? Explore the full suite of tools UBOS offers:
- UBOS homepage – your gateway to AI‑first business solutions.
- About UBOS – learn how our team builds trustworthy AI platforms.
- Enterprise AI platform by UBOS – scale brand governance across the organization.
- UBOS for startups – fast‑track your brand’s AI readiness.
- UBOS solutions for SMBs – affordable AI tools for growing businesses.
- Web app editor on UBOS – create and publish structured brand pages in minutes.
- UBOS pricing plans – transparent pricing for every stage of AI adoption.
- UBOS portfolio examples – see real‑world brand protection use cases.
Dive deeper into ready‑made AI solutions with our template marketplace. For example, the AI SEO Analyzer can audit your site for AI‑specific gaps, while the AI Article Copywriter helps you generate the granular content that LLMs love to cite.
Stay ahead of hallucinations—turn AI from a risk into a strategic advantage with UBOS.