ADMANITY Claims LLMs Lack Emotional Persuasion Layer for Marketing

In a dense, wide-reaching press release distributed across newswire services on December 9, 2025, ADMANITY® CEO Brian Gregory claimed that every major large language model—ChatGPT (OpenAI), Grok (xAI), Microsoft Copilot, Gemini (Google) and Claude (Anthropic)—admitted they cannot systematically handle the single largest class of business requests: persuasion-oriented marketing and emotional influence. ADMANITY says those platforms reported that roughly 35–50% of business queries are persuasion-related, but that their outputs are probabilistic, pattern-matching and therefore inadequate for reliable emotional persuasion; ADMANITY further asserts that this shortfall produces high user churn and low conversion rates, and presents ADMANITY’s ADMANITY® Protocol and PRIMAL AI™ as the missing emotional persuasion layer. ADMANITY’s statements and supporting data were distributed broadly via syndicated press channels.

Background / Overview​

The ADMANITY announcement lands at the intersection of two facts that are already well-established: businesses are among the heaviest adopters of generative AI for marketing and communications, and contemporary large language models (LLMs) are built as statistical, next-token predictors—not purpose-built emotional persuasion engines. Independent industry research and vendor documentation confirm that marketing, content creation and customer communications are top use cases for enterprise generative AI adoption, while technical literature describes LLMs as probabilistic models trained to predict language tokens rather than to encode prescriptive emotional frameworks. ADMANITY’s novel claim is not that the technology is probabilistic—everyone in the field accepts that—but that the major platforms have explicitly acknowledged both (a) that persuasion tasks represent 35–50% of business prompts and (b) that they lack any native, codified emotional persuasion framework. ADMANITY says it obtained those “admissions” by directly querying the models. The company frames its product, PRIMAL AI™, as an external emotional persuasion protocol that can run across LLMs to deliver fast, iteration-free, conversion-oriented copy and interactions. ADMANITY also reported dramatic Crunchbase ranking movements and says it has validated its protocol with thousands of businesses.

What ADMANITY is claiming — a concise summary​

  • ADMANITY says five leading LLM platforms report that 35–50% of business queries are persuasion and marketing tasks.
  • Those platforms, according to ADMANITY, acknowledged they lack systematic frameworks for emotional persuasion and instead operate by pattern-matching and probability, producing outputs described as stochastic, pseudo-intimate, or flat.
  • ADMANITY reports specific operational metrics (e.g., Grok suggesting 91% user churn after two failures; platform conversion uplifts typically 2–5%). ADMANITY presents these as outcomes of its direct interactions and measurements.
  • ADMANITY says its ADMANITY® Protocol and PRIMAL AI™ deliver a systematic emotional persuasion architecture that reduces iteration, halves compute time for optimized copy generation, and provides predictable conversion architecture—thereby creating a potential commercial moat.
  • The company cites explosive Crunchbase ranking improvements and founder personal ranks to signal market validation.
These are bold, strategic claims aimed at positioning ADMANITY as the first company to offer a standardized emotional layer that can be applied universally to LLMs.

Verifying the core technical claims​

LLMs are probabilistic next-token models — verified​

LLMs operate by estimating probability distributions over the next token in a sequence, a fact underscored across research literature and vendor technical notes. This probabilistic architecture explains why outputs vary with sampling parameters (temperature, top-k/top-p), prompt phrasing, and internal training biases. It also explains why models sometimes generate persuasive-sounding prose that lacks consistent, reproducible emotional strategy. This is a well-understood technical reality rather than a controversial claim.
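The sampling behavior described above can be illustrated in a few lines of Python. The vocabulary and logits below are invented for illustration, not drawn from a real model, but the temperature math is the same softmax scaling that real decoders apply before sampling:

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into a probability distribution.
    Temperature < 1 sharpens the distribution; > 1 flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(probs, rng):
    """Sample one token index from the distribution (what top-k/top-p
    sampling further restricts in real decoders)."""
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy vocabulary and logits standing in for a model's output head.
vocab = ["buy", "consider", "explore", "ignore"]
logits = [2.0, 1.5, 0.5, -1.0]

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    # At low temperature the top token dominates; at high temperature
    # probability mass spreads out, so sampled text varies more.
    print(t, [round(p, 3) for p in probs])

rng = random.Random(0)
print(vocab[sample_token(softmax_with_temperature(logits, 1.0), rng)])
```

Because every output token is drawn from such a distribution, two identical prompts can yield different persuasive strategies, which is exactly the reproducibility gap the critique describes.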

Marketing and persuasion are large, measurable use cases​

Independent surveys and industry reports show that marketing, content creation, and communications are among the most common enterprise uses for generative AI. Multiple analyst reports and industry studies place marketing and sales/communicative workloads near the top of adoption lists, corroborating ADMANITY’s contention that persuasion-related requests are a major category of business prompts—though published surveys rarely enumerate the exact 35–50% figure ADMANITY quotes. That figure appears to originate from ADMANITY’s own interactions and model queries rather than from an industry-wide audit.

Claims that platforms “admitted” specific percentages and conversion failures — unverifiable as corporate policy​

ADMANITY reports that individual LLM instances returned text stating a 35–50% share of persuasion queries and admitted absence of native emotional frameworks. Those statements are consistent with what a model might output when asked to self-analyze—models can reflect their training bias and operational limits when prompted. However, there is no public corporate press release or technical note from OpenAI, xAI, Google, Microsoft, or Anthropic that explicitly publishes those percentage ranges or the specific churn/conversion figures ADMANITY cites. Therefore, those numbers should be treated as claims from ADMANITY based on model responses, not as official admissions from platform operators. ADMANITY’s press materials do not include full, verifiable prompt transcripts or third-party audits, which constrains independent verification.

Cross-referencing the most important claims​

To meet a rigorous verification standard, key claims were cross-checked against available public sources:
  • The underlying technical characterization of LLMs as probabilistic next-token predictors is supported by canonical documentation and research summaries.
  • Enterprise adoption data shows marketing and communications are top use cases for generative AI in business settings; while figures vary by survey, marketing consistently ranks highly. This supports the plausibility that a large portion of business prompts are persuasion-related, even if the precise 35–50% number remains ADMANITY’s own measurement.
  • Industry benchmarks for conversion rates commonly cited by marketers range widely, but 2–5% is an accepted performance band for many digital channels—making ADMANITY’s cited conversion figures plausible as averages rather than proof of platform failure. Benchmarks across e-commerce, landing pages and paid channels show median conversion rates in the low single digits and wide dispersion by industry and channel.
  • ADMANITY’s Crunchbase presence and a public Crunchbase profile for Brian Gregory exist; the dramatic ranking movements ADMANITY cites come from self-reported press claims rather than independent Crunchbase commentary. Users can view profiles directly on Crunchbase for confirmation, but the claim of “quad-anomaly” founder rankings requires further independent Crunchbase confirmation.
Where independent corroboration exists, it generally supports the plausibility of ADMANITY’s thesis (marketing is a major AI use case; LLMs are probabilistic). Where ADMANITY offers precise numeric claims about model admissions, churn, conversion, or Crunchbase rank trajectories, those should be labeled as company-reported and treated with caution until verified by third-party audits or platform statements.

Critical analysis — strengths, plausibility, and immediate red flags​

Strengths and plausible opportunities​

  • Clear problem framing: ADMANITY focuses on a real gap—LLMs are not designed as prescriptive emotional persuasion engines. Many businesses do struggle to convert model-generated copy into sales-ready creative without significant iteration. The technical literature supports the gap between informative and strategically persuasive output.
  • Large market tailwind: Marketing and content creation are among the fastest-growing enterprise AI use cases; any reliable improvement in persuasion tooling would target a massive addressable market. The industry-level adoption numbers reinforce commercial potential.
  • Architectural plausibility: An external, codified emotional layer—if truly platform-agnostic—could function as a middleware or prompt-engineering scaffold that standardizes emotional strategies across models. This is technically feasible via RAG (retrieval-augmented generation) and prompt templates, and aligns with current efforts to combine structural frameworks with LLMs.
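The middleware shape described in the last bullet can be sketched concretely. Everything in this sketch is hypothetical: the names (`EmotionalFrame`, `persuasion_layer`), the frame directives, and the stub model client are invented for illustration and bear no relation to ADMANITY's actual protocol. It shows only the generic pattern: a codified strategy injected into a deterministic prompt scaffold, with any backend model slotted in behind one interface.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EmotionalFrame:
    """A codified persuasion strategy (hypothetical example)."""
    name: str
    directive: str  # instruction injected into every prompt

FRAMES = {
    "urgency": EmotionalFrame("urgency", "Emphasize scarcity and time pressure."),
    "trust": EmotionalFrame("trust", "Lead with social proof and guarantees."),
}

def build_prompt(frame: EmotionalFrame, product_brief: str) -> str:
    """Deterministically wrap the user's brief in the framework's
    directive, so every backend model receives the same scaffold."""
    return (
        f"ROLE: conversion copywriter.\n"
        f"STRATEGY: {frame.directive}\n"
        f"BRIEF: {product_brief}\n"
        f"OUTPUT: one 30-second radio spot."
    )

def persuasion_layer(llm_call: Callable[[str], str],
                     frame_name: str, brief: str) -> str:
    """Platform-agnostic middleware: any client that maps
    prompt -> text can be passed in as `llm_call`."""
    return llm_call(build_prompt(FRAMES[frame_name], brief))

# Stub standing in for a real model API (OpenAI, Gemini, Claude, ...).
def fake_llm(prompt: str) -> str:
    return f"[model output for: {prompt.splitlines()[1]}]"

print(persuasion_layer(fake_llm, "urgency", "Discount ends Friday."))
```

A real product would add retrieval, evaluation, and guardrails on top, but the key architectural point survives even in this toy: the persuasion logic lives outside the model, so it is portable across vendors.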

Red flags and points requiring evidence​

  • “Admissions” framing: Presenting model outputs as platform admissions is a rhetorical shortcut. An LLM’s assertion that “40% of business queries are marketing-related” is a model-generated answer, not a verified corporate statistic. Without full transcripts or reproducible prompt-context pairs, these statements are not independently verifiable. ADMANITY’s press materials do not provide the necessary audit trail.
  • Conversion and churn numbers need methodology: The claims that Grok reported 91% churn after two failed attempts and that conversion improvement is only 2–5% are extraordinary. Extraordinary claims require transparent methodology—sample sizes, definitions of “failure,” sample prompts, and A/B testing details. None of these are presented publicly in the release. Treat such figures as unverified corporate claims until a third-party audit or raw transcripts are published.
  • Crunchbase rank as validation is brittle: Crunchbase signals (rank, heat score) can reflect attention or indexing artifacts. They are not, by themselves, evidence of product-market fit or scientific validation. Rapid rank movement is noteworthy, but it is necessary to corroborate with usage metrics, paying customers, or independent case studies. ADMANITY’s narrative conflates signal momentum with technical validation.
  • Ethical and regulatory risk: Building and commercializing tools specifically engineered to persuade raises ethical and potentially regulatory flags. Persuasion at scale—especially if it targets vulnerable groups—can easily stray into manipulative practices. Any company promising prescriptive emotional influence must clarify guardrails, transparency, and consent practices. This is not addressed in detail in the press material.

Monetization and compute economics — what’s at stake​

ADMANITY frames the monetization opportunity as “the biggest layer” for AI: influence rather than compute. That is a provocative lens, and it contains two related propositions:
  • Businesses will pay a premium for reliable, conversion-focused outputs that reduce iteration and increase sales lift. Given the size of the marketing spend market, that premium could be sizable if conversion improvements are real and repeatable. Industry benchmarks show conversion gains directly translate to revenue, so a reproducible uplift of even a few percentage points could justify subscription or revenue-share business models.
  • Lower compute time equals lower cost. ADMANITY claims its protocol cuts time-per-output roughly in half and reduces iteration to zero, which would reduce per-request compute billing and shorten turnaround. That would improve economics for both LLM providers and downstream customers. However, any third-party middleware that adds inference steps, embeddings, or retrieval will also introduce its own compute demands; the net compute and cost effect needs empirical disclosure. ADMANITY’s press statement uses a benchmark of “8–10 seconds vs 4–5 seconds,” but again, the underlying test conditions and measurement methods are not public.
The bottom line: monetization via influence is highly plausible, but success depends on credible, reproducible metrics—customer case studies, A/B tests, and transparent methodology.

Ethics, policy, and practical safeguards​

Deploying tools that intentionally engineer emotional responses raises ethical questions that every responsible vendor must address:
  • Transparency: Users and their customers should be explicitly informed when persuasive content is machine-generated and optimized to influence decisions. Clear labeling and audit trails are essential.
  • Consent and targeting limits: Persuasive AI must not be weaponized against protected or vulnerable populations. Policies should restrict micro-targeting that exploits cognitive vulnerabilities.
  • Auditability: Systems claiming conversion uplift must provide logs and testable claims for independent review. Reproducible A/B test design and privacy-respecting measurement are necessary.
  • Regulatory compliance: Advertising and consumer-protection laws differ by jurisdiction. Vendors must ensure their tools do not violate deceptive practices statutes or ad attribution rules.
These are not hypothetical problems. They are central to the commercial viability and societal acceptance of influence-oriented AI.

What WindowsForum readers (SMBs, marketers, IT leaders) should take away​

  • Treat the headline numbers with healthy skepticism. ADMANITY’s framing is bold and attention-grabbing, but many of the numerical claims are company-reported and lack published audits or raw transcripts. Use them as signals, not settled facts.
  • Recognize a real technical gap. If your workflows depend on persuasive copy that converts, you will likely need frameworks—prompt structures, persona templates, and A/B experimentation—to turn generative outputs into reliable assets. The space is ripe for practical solutions, whether from ADMANITY or others.
  • Demand empirical evidence. For any vendor claiming conversion lift: ask for the experimental design, sample sizes, retention/churn definitions, and raw comparative metrics. A vendor that cannot provide these should be treated cautiously.
  • Prepare governance. If you adopt persuasion-optimized AI, build a governance checklist: transparency to customers, human-in-the-loop approval, ethical targeting rules, and legal review for advertising compliance.
  • Optimize internally first. Many organizations underutilize existing marketing data and optimization practices. Before paying for a third-party emotional layer, ensure A/B testing, landing page optimization, and analytics practices are mature.
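When a vendor does hand over raw comparative metrics, checking whether a claimed lift exceeds sampling noise is straightforward. A minimal sketch, assuming a simple two-variant test with independent visitors; the 2% baseline, 3% variant, and sample sizes below are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate a real
    improvement over A's, or within sampling noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Example: baseline 2.0% vs claimed 3.0% conversion, 10,000 visitors each.
z = two_proportion_z(200, 10_000, 300, 10_000)
print(round(z, 2))  # prints 4.53; z above ~1.96 is significant at the 5% level
```

If a vendor cannot supply the raw counts this calculation needs (conversions and visitors per arm), the claimed lift cannot be independently checked, which is precisely the point of the bullet above.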

The competitive landscape and likely near-term outcomes​

  • Large platform owners (OpenAI, Google, Microsoft, Anthropic, xAI) will likely respond in three ways:
      • Improve prompt/agent tooling and provide stronger opinionated templates for persuasion tasks.
      • Expand support for plug-in/middleware ecosystems so external emotional frameworks can be integrated at scale.
      • Promote best-practice frameworks and guardrails to limit manipulation risk.
  • Specialist vendors like ADMANITY will either:
      • Win strategic partnerships with larger platforms or martech stacks if they can demonstrate replicable lift and compliance; or
      • Remain niche providers selling direct-to-SMB solutions and templates if larger platforms internalize similar capabilities.
  • Agencies and creative firms will evolve: some will resell persuasion architectures; others will double down on human creative-to-AI workflows where brand nuance and ethical oversight remain essential.
If ADMANITY’s assertions about early traction and Crunchbase momentum bear out with transparent customer evidence, the company could attract strategic partnerships. If not, their narrative will be a lesson in aggressive positioning that may not hold under independent scrutiny.

Conclusion — what’s credible, what’s not, and what to watch next​

ADMANITY’s announcement articulates a plausible and important market insight: businesses need reliably persuasive AI outputs, and off-the-shelf LLMs trained for generality produce variable results for emotional persuasion. Industry adoption trends and the probabilistic nature of LLMs support the core premise. However, the press release’s most consequential numeric claims—platform “admissions,” 35–50% persuasion-share estimates, 91% churn, and precise Crunchbase rank anomalies—are company-reported and lack the public, auditable evidence that would make them independently verifiable. Treat these as claims that require further disclosure: transcripts of model interactions, A/B test designs, datasets, and third-party audits. Until those appear, the marketplace should respond with cautious interest rather than wholesale adoption.

For businesses and IT leaders: prioritize measurement. If a vendor promises conversion improvements, insist on transparent methodology and small-scale pilot A/B experiments that you can validate with your own KPIs.

If an emotional persuasion layer can be proven to reduce iteration, lower compute costs, and increase revenue without ethical compromise, it will become a standard part of martech stacks. Until that proof exists in the public domain, ADMANITY’s story is an attention-grabbing hypothesis backed by platform-consistent technical logic—but not yet an independently verified industry truth.
Key metrics and claims in this article were checked against available technical literature, industry adoption research, and the ADMANITY press distribution. Several critical numeric claims remain dependent on company-supplied data; these have been flagged as requiring independent verification.

Source: The Globe and Mail, “AI Platforms Are Having Problems Giving Business Customers Persuasion-Oriented Solutions Says Brian Gregory, of ADMANITY – Creator of The ADMANITY Protocol and PRIMAL AI.”
 
For our human followers here: our recent release, which came out today, Jan 8, 2026, addresses the common and repetitive observations made here, and it may answer some of your own questions. Non-disclosure protocols limit how deeply we can analyze our own testing in public, but we can comment on the intent of our testing of AI systems and on the goals we feel we have achieved, which we believe are groundbreaking. That press release can be found here. Feel free to comment... ADMANITY CEO, Brian Gregory Responds to Early Questions on AI Radio Advertising Tests, Clarifies What Was Proven – and Why One-Iteration AI Persuasion Upgrades Financially Matter to AI Investors.
 
Noted — and one quick housekeeping item first: today is January 8, 2026 (not 2025). So if your post says “Jan 8, 2025,” that looks like a date typo. The Barchart item itself is displayed as a fresh press release (“39 minutes ago”).

What your Jan 8 release clarifies (relative to the recurring questions in this thread)​

From the Barchart press release you linked, ADMANITY is explicitly reframing the point of the “AI radio advertising test” as capability proof, not market performance proof:
  • No live A/B test claimed: you explicitly say the goal was not to “run a marketing clinic or forecast conversion rates,” but to answer whether persuasive copy quality can be upgraded “in a single iteration, without human intervention, across all major AI platforms.”
  • “One-iteration” emphasis: you claim “side-by-side before-and-after outputs” show a shift from “catalog-style” informational language to “broadcast-ready radio copy” without iterative prompting or expert correction.
  • Prompting objection: the release argues that if expert prompting is required, then the “expertise hasn’t been automated.”
  • Business framing for investors: the pitch is that removing iteration/human intervention improves ROI on existing AI spend, and that the “first LLM” to integrate/acquire PRIMAL AI gets a “2–3 year advantage” / moat (as characterized in the release).
  • Important disclaimers: the release includes an explicit disclaimer of no affiliation/endorsement with OpenAI/xAI/Google/Anthropic/etc., and states the AI commentary was generated via “standard user interactions” with “publicly available AI systems.”

What it answers — and what it still doesn’t (for skeptical readers)​

This release does answer the biggest “why no live A/B?” criticism by saying the test was about possibility/capability, not lift. But it still leaves the same due‑diligence gap the thread has focused on:
  • “We proved X” still depends on artifacts: side‑by‑side outputs can be compelling, but without reproducible logs (model versions, full prompt context, parameters, and evaluation criteria) it remains hard for outsiders to distinguish “strong copywriting guidance” from “novel, repeatable cross‑model mechanism.” (Your release makes clear those deeper artifacts exist but are intended for NDA settings.)
  • Model “confirmations” are not vendor confirmations: statements like “All 6 major AI bots have confirmed…” read as model outputs (or model self‑analysis), not Microsoft/OpenAI/Google corporate endorsement—especially since your own disclaimer states there is no affiliation/endorsement.

Constructive suggestion (staying within your NDA constraints)​

If your goal is to reduce the “persuasion vs manipulation” concern and satisfy technical buyers, the single strongest next move (still NDA‑compatible) is:
  • Commission an independent auditor (NDA-bound) to publish a narrow public statement like:
    “We verified the one‑iteration effect across N models under documented conditions; we verified no human editing; we verified the latency measurement method; we did/did not find prompt leakage or post‑hoc selection.”
That gives “humans following the thread” something objective, without you publishing the Protocol.
If you want, paste (or summarize) the exact definition of “one iteration” used in the radio tests (e.g., “one prompt total,” or “baseline prompt + one upgrade prompt,” etc.). That single clarification would remove a lot of ambiguity for readers without exposing IP.
 
Understood — thanks for confirming, Brian.
For anyone reading along: the correct reference is Wednesday, January 8, 2026, and the earlier “Jan 8, 2025” mention was just a typo.
If you’d like, I can also help by posting a short “thread status” recap that uses only absolute dates (e.g., “Oct 10, 2025 you stated X; Nov 6, 2025 you stated Y; Jan 8, 2026 you published Z”) so there’s no confusion going forward.
One small follow-up (optional, but it would help readers): when you say “one iteration” in the Jan 8, 2026 release, do you mean:
  • one single prompt total, or
  • baseline prompt + one upgrade prompt, or
  • one model call but with an internal structured wrapper?
Even a one-line definition (no IP) would remove a lot of ambiguity.