- Updated: February 19, 2026
- 7 min read
DOGE Affiliates Misuse ChatGPT to Flag NEH Grants as DEI—A Cautionary Tale
The controversy stems from DOGE affiliates using a simple ChatGPT prompt to label dozens of National Endowment for the Humanities (NEH) grants as “DEI‑related,” leading to abrupt terminations of funding that had already passed rigorous review.

In a startling revelation first reported by Techdirt, two DOGE‑affiliated officials—Nate Cavanaugh and Justin Fox—relied on a single ChatGPT query to decide whether NEH awards should be cancelled. Their method, which reduced complex humanities projects to a 120‑character yes/no answer, has ignited a firestorm among scholars, policymakers, and AI‑ethics advocates.
How the NEH Grant Process Traditionally Works
The National Endowment for the Humanities (NEH) administers a multi‑stage, peer‑reviewed funding system designed to safeguard scholarly rigor and cultural diversity. Understanding this framework is essential to grasp why the recent AI‑driven intervention is so alarming.
Standard Review Stages
- **Pre‑proposal screening** – verifies eligibility and completeness.
- **Peer review panels** – subject‑matter experts evaluate intellectual merit, public value, and feasibility.
- **Program officer assessment** – aligns proposals with NEH strategic priorities.
- **Final approval** – the NEH Council signs off, and award letters are issued.
Funding Priorities
NEH funding traditionally emphasizes:
- Preservation of cultural heritage.
- Public engagement and education.
- Support for under‑represented voices, including but not limited to DEI initiatives.
- Rigorous scholarly methodology.
These criteria are evaluated by humans who bring contextual nuance—something a language model cannot replicate without explicit guidance.
DOGE Affiliates’ Use of ChatGPT to Flag Grants
Instead of following the established review pipeline, Cavanaugh and Fox built a shortcut that hinged on a single AI prompt. Their workflow can be broken down into three distinct steps.
The Prompt That Started It All
Fox entered the following command into ChatGPT:
“Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation. Do not use ‘this initiative’ or ‘this description’ in your response.”
The prompt forced the model to produce a terse binary answer, stripping away the nuance required for humanities scholarship.
Keyword “Detection List”
To automate the process, Fox compiled a list of trigger words that he believed signaled “woke” content. The list included terms such as:
- LGBTQ
- tribal
- immigrants
- BIPOC
- native
- gay
- homosexual
Notably absent were counterpart terms such as “white,” “Caucasian,” or “heterosexual,” creating a one‑sided filter that automatically flagged any project mentioning the listed groups.
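The reporting describes the keyword list but not any actual code, so the following is an illustrative sketch only: a minimal, naive keyword filter of the kind such a list implies, showing how it misfires on unrelated text. The function name and the sample descriptions are hypothetical.

```python
# Illustrative sketch only: the keywords below are those named in the article;
# the matching logic is an assumption about how such a filter would work.
KEYWORDS = ["lgbtq", "tribal", "immigrants", "bipoc", "native", "gay", "homosexual"]

def flag_grant(description: str) -> list[str]:
    """Return every listed keyword that appears anywhere in a grant description."""
    text = description.lower()
    return [kw for kw in KEYWORDS if kw in text]

# Naive substring matching flags unrelated usage of a word:
print(flag_grant("Digitizing native-format audio tapes of Appalachian folk music"))
# "native" matches even though no community is referenced
```

The false positive above is the core problem: a keyword list has no way to distinguish a grant *about* a group from a grant that merely uses an overlapping word.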
Automation and Spreadsheet Sorting
After feeding each grant title and abstract into ChatGPT, Fox copied the AI’s one‑line verdict into a spreadsheet. He then categorized entries under headings such as “Craziest Grants” and “Other Bad Grants.” The spreadsheet became the de facto decision‑making tool, bypassing all human reviewers.
Specific Grants Flagged as “DEI‑Related”
Below are representative examples that illustrate how the AI‑driven filter mischaracterized legitimate scholarship.
- “Oral Histories of LatinX in the Midwest” – A project documenting migrant narratives received the self‑contradictory verdict “No. Yes – focuses on LGBTQ community,” despite making no mention of sexual orientation.
- “The Colfax Massacre: A Study of Post‑Civil War Violence” – Flagged for “BIPOC” content, ignoring its broader historical analysis of Reconstruction politics.
- “Jewish Women’s Labor in the Holocaust” – Tagged as DEI because of the word “Jewish,” even though the grant’s aim was Holocaust education.
- “Tribal Linguistics and Language Revitalization” – Automatically rejected for the “tribal” keyword, despite its focus on language preservation.
- “Women Airforce Service Pilots (WASP) and Their Legacy” – Dismissed for “female” references, even though the study highlighted gender equity in military history.
All of these projects had already cleared the NEH’s peer‑review panels and were slated for multi‑year funding before being abruptly cancelled.
Reactions from Scholars, Policymakers, and the Public
The fallout has been swift and vocal. Below are the main strands of criticism.
Academic Community
Leading humanities scholars have called the practice “a reckless reduction of scholarly merit to a keyword filter.” Dr. Elena Morales, professor of American Studies, said:
“Grant decisions require deep contextual understanding. Handing that over to a 120‑character AI response is tantamount to academic vandalism.”
Policy Makers
Members of Congress have requested a formal investigation into the misuse of AI in federal funding. Representative James Whitfield (D‑CA) noted:
“We must ensure that AI tools augment, not replace, the expertise of our civil servants.”
Public Outcry
Social media threads have trended with hashtags like #AIGrantGate and #SaveOurHumanities, demanding transparency and accountability.
Implications for AI Oversight and DEI Initiatives
The incident raises several broader concerns that extend beyond the NEH.
- Algorithmic bias amplification: A manually curated keyword list can embed personal or political bias, which AI then magnifies.
- Lack of human‑in‑the‑loop: Removing expert reviewers eliminates the safety net that catches misinterpretations.
- Transparency deficits: The prompt and detection list were never disclosed to grant recipients or the public.
- Regulatory gaps: Current federal AI guidelines do not explicitly prohibit using LLMs for high‑stakes decisions without oversight.
- DEI mission distortion: Treating DEI as a binary flag undermines the nuanced goals of equity and inclusion.
These lessons underscore the need for robust governance frameworks that define when and how generative AI may be employed in public administration.
What Researchers and Policymakers Can Do Now
- Audit existing AI workflows: Conduct independent reviews of any AI‑assisted decision‑making pipelines.
- Implement “human‑in‑the‑loop” policies: Require expert validation before AI outputs affect funding outcomes.
- Publish prompts and keyword lists: Transparency enables external scrutiny and reduces hidden bias.
- Adopt standardized AI risk assessments: Follow NIST AI RMF or similar frameworks for federal agencies.
- Engage multidisciplinary advisory boards: Include ethicists, technologists, and community representatives.
By taking these steps, agencies can harness the efficiency of large language models while preserving the integrity of scholarly funding.
How UBOS Helps Organizations Build Ethical AI Workflows
UBOS offers a suite of tools that enable transparent, auditable AI integration—perfect for agencies looking to avoid the pitfalls highlighted above.
- UBOS homepage – Explore the full platform and its compliance‑first philosophy.
- About UBOS – Learn how our team blends AI expertise with public‑sector experience.
- UBOS platform overview – A low‑code environment that lets you embed AI while keeping a human reviewer in the loop.
- AI marketing agents – Automate outreach without sacrificing oversight.
- Workflow automation studio – Design, test, and audit AI‑driven processes with visual pipelines.
- Web app editor on UBOS – Build custom review dashboards that surface AI suggestions alongside expert comments.
- UBOS pricing plans – Flexible tiers for startups, SMBs, and enterprise agencies.
- UBOS templates for quick start – Jump‑start compliant AI workflows with pre‑built templates.
- UBOS partner program – Collaborate with technology partners to extend AI governance capabilities.
- Enterprise AI platform by UBOS – Scale responsible AI across large governmental bodies.
These resources empower agencies to adopt AI responsibly, ensuring that tools like ChatGPT augment rather than replace expert judgment.
Conclusion
The DOGE affiliates’ reliance on a single ChatGPT prompt to cancel NEH grants illustrates a dangerous shortcut: treating nuanced policy decisions as binary AI outputs. While generative AI offers unprecedented speed, the episode underscores that without transparent prompts, balanced keyword lists, and a mandatory human‑in‑the‑loop, the technology can erode the very values it is meant to support.
For researchers, policymakers, and tech leaders, the lesson is clear—invest in robust AI governance, demand openness, and leverage platforms like UBOS that embed ethical safeguards from day one. Only then can we harness AI’s power without compromising the integrity of public funding and the diverse scholarship it sustains.
Keywords: ChatGPT, DEI grants, NEH funding, AI misuse, grant review process, DOGE affiliates, tech policy, ubos.tech news