In a dense, wide-reaching press release distributed across newswire services on December 9, 2025, ADMANITY® CEO Brian Gregory claimed that every major large language model—ChatGPT (OpenAI), Grok (xAI), Microsoft Copilot, Gemini (Google) and Claude (Anthropic)—admitted it cannot systematically handle the single largest class of business requests: persuasion-oriented marketing and emotional influence. ADMANITY says those platforms reported that roughly 35–50% of business queries are persuasion-related, but that their outputs are probabilistic and pattern-matching, and therefore inadequate for reliable emotional persuasion. The company further asserts that this shortfall produces high user churn and low conversion rates, and it presents the ADMANITY® Protocol and PRIMAL AI™ as the missing emotional persuasion layer. ADMANITY’s statements and supporting data were distributed broadly via syndicated press channels.
Background / Overview
The ADMANITY announcement lands at the intersection of two facts that are already well-established: businesses are among the heaviest adopters of generative AI for marketing and communications, and contemporary large language models (LLMs) are built as statistical, next-token predictors—not purpose-built emotional persuasion engines. Independent industry research and vendor documentation confirm that marketing, content creation and customer communications are top use cases for enterprise generative AI adoption, while technical literature describes LLMs as probabilistic models trained to predict language tokens rather than to encode prescriptive emotional frameworks. ADMANITY’s novel claim is not that the technology is probabilistic—everyone in the field accepts that—but that the major platforms have explicitly acknowledged both (a) that persuasion tasks represent 35–50% of business prompts and (b) that they lack any native, codified emotional persuasion framework. ADMANITY says it obtained those “admissions” by directly querying the models. The company frames its product, PRIMAL AI™, as an external emotional persuasion protocol that can run across LLMs to deliver fast, iteration-free, conversion-oriented copy and interactions. ADMANITY also reported dramatic Crunchbase ranking movements and says it has validated its protocol with thousands of businesses.
What ADMANITY is claiming — a concise summary
- ADMANITY says five leading LLM platforms report that 35–50% of business queries are persuasion and marketing tasks.
- Those platforms, according to ADMANITY, acknowledged they lack systematic frameworks for emotional persuasion and instead operate by pattern-matching and probability, producing outputs described as stochastic, pseudo-intimate, or flat.
- ADMANITY reports specific operational metrics (e.g., Grok suggesting 91% user churn after two failures; platform conversion uplifts typically 2–5%). ADMANITY presents these as outcomes of its direct interactions and measurements.
- ADMANITY says its ADMANITY® Protocol and PRIMAL AI™ deliver a systematic emotional persuasion architecture that reduces iteration, halves compute time for optimized copy generation, and provides predictable conversion architecture—thereby creating a potential commercial moat.
- The company cites explosive Crunchbase ranking improvements and founder personal ranks to signal market validation.
Verifying the core technical claims
LLMs are probabilistic next-token models — verified
LLMs operate by estimating probability distributions over the next token in a sequence, a fact underscored across research literature and vendor technical notes. This probabilistic architecture explains why outputs vary with sampling parameters (temperature, top-k/top-p), prompt phrasing, and internal training biases. It also explains why models sometimes generate persuasive-sounding prose that lacks consistent, reproducible emotional strategy. This is a well-understood technical reality rather than a controversial claim.
Marketing and persuasion are large, measurable use cases
Independent surveys and industry reports show that marketing, content creation, and communications are among the most common enterprise uses for generative AI. Multiple analyst reports and industry studies place marketing and sales/communicative workloads near the top of adoption lists, corroborating ADMANITY’s contention that persuasion-related requests are a major category of business prompts—though published surveys rarely enumerate the exact 35–50% figure ADMANITY quotes. That figure appears to originate from ADMANITY’s own interactions and model queries rather than from an industry-wide audit.
Claims that platforms “admitted” specific percentages and conversion failures — unverifiable as corporate policy
ADMANITY reports that individual LLM instances returned text stating a 35–50% share of persuasion queries and admitted absence of native emotional frameworks. Those statements are consistent with what a model might output when asked to self-analyze—models can reflect their training bias and operational limits when prompted. However, there is no public corporate press release or technical note from OpenAI, xAI, Google, Microsoft, or Anthropic that explicitly publishes those percentage ranges or the specific churn/conversion figures ADMANITY cites. Therefore, those numbers should be treated as claims from ADMANITY based on model responses, not as official admissions from platform operators. ADMANITY’s press materials do not include full, verifiable prompt transcripts or third-party audits, which constrains independent verification.
Cross-referencing the most important claims
To meet a rigorous verification standard, key claims were cross-checked against available public sources:
- The underlying technical characterization of LLMs as probabilistic next-token predictors is supported by canonical documentation and research summaries.
- Enterprise adoption data shows marketing and communications are top use cases for generative AI in business settings; while figures vary by survey, marketing consistently ranks highly. This supports the plausibility that a large portion of business prompts are persuasion-related, even if the precise 35–50% number remains ADMANITY’s own measurement.
- Industry benchmarks for conversion rates commonly cited by marketers range widely, but 2–5% is an accepted performance band for many digital channels—making ADMANITY’s cited conversion figures plausible as averages rather than proof of platform failure. Benchmarks across e-commerce, landing pages and paid channels show median conversion rates in the low single digits and wide dispersion by industry and channel.
- ADMANITY’s Crunchbase presence and a public Crunchbase profile for Brian Gregory exist; the dramatic ranking movements ADMANITY cites come from self-reported press claims rather than independent Crunchbase commentary. Users can view profiles directly on Crunchbase for confirmation, but the claim of “quad-anomaly” founder rankings requires further independent Crunchbase confirmation.
Critical analysis — strengths, plausibility, and immediate red flags
Strengths and plausible opportunities
- Clear problem framing: ADMANITY focuses on a real gap—LLMs are not designed as prescriptive emotional persuasion engines. Many businesses do struggle to convert model-generated copy into sales-ready creative without significant iteration. The technical literature supports the gap between informative and strategically persuasive output.
- Large market tailwind: Marketing and content creation are among the fastest-growing enterprise AI use cases; any reliable improvement in persuasion tooling would target a massive addressable market. The industry-level adoption numbers reinforce commercial potential.
- Architectural plausibility: An external, codified emotional layer—if truly platform-agnostic—could function as a middleware or prompt-engineering scaffold that standardizes emotional strategies across models. This is technically feasible via RAG (retrieval-augmented generation) and prompt templates, and aligns with current efforts to combine structural frameworks with LLMs.
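To make the middleware framing concrete, here is a minimal sketch of what a platform-agnostic persuasion layer could look like mechanically: a codified strategy template wrapped around any model endpoint. All names here (EmotionalStrategy, build_prompt, run) are hypothetical illustrations for this article; nothing in the sketch reflects ADMANITY's actual protocol.

```python
# Illustrative sketch of a platform-agnostic "persuasion layer" as prompt
# middleware. All names are hypothetical; this does NOT reflect ADMANITY's
# actual, proprietary protocol.
from dataclasses import dataclass
from typing import Callable


@dataclass
class EmotionalStrategy:
    """A codified emotional framework applied uniformly to every request."""
    name: str
    appeal: str           # e.g. "urgency", "belonging"
    tone: str             # e.g. "warm, direct"
    call_to_action: str


def build_prompt(strategy: EmotionalStrategy, brief: str) -> str:
    """Wrap a raw marketing brief in the codified strategy."""
    return (
        f"Strategy: {strategy.name}\n"
        f"Primary appeal: {strategy.appeal}\n"
        f"Tone: {strategy.tone}\n"
        f"Required call to action: {strategy.call_to_action}\n"
        f"Brief: {brief}\n"
        "Write conversion-oriented copy following the strategy above."
    )


def run(llm: Callable[[str], str], strategy: EmotionalStrategy, brief: str) -> str:
    # `llm` is any text-in/text-out endpoint, so the layer is model-agnostic.
    return llm(build_prompt(strategy, brief))
```

Because the layer only manipulates prompts and accepts any `llm` callable, the same strategy object could in principle drive ChatGPT, Gemini, or Claude endpoints interchangeably, which is what makes the middleware framing technically plausible even before any claims about its effectiveness are tested.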
Red flags and points requiring evidence
- “Admissions” framing: Presenting model outputs as platform admissions is a rhetorical shortcut. An LLM’s assertion that “40% of business queries are marketing-related” is a model-generated answer, not a verified corporate statistic. Without full transcripts or reproducible prompt-context pairs, these statements are not independently verifiable. ADMANITY’s press materials do not provide the necessary audit trail.
- Conversion and churn numbers need methodology: The claim that Grok reported 91% churn after two failed attempts and that conversion improvement is only 2–5% are extraordinary metrics. Extraordinary claims require transparent methodology—sample sizes, definitions of “failure,” sample prompts, and A/B testing details. None of these are presented publicly in the release. Treat such figures as unverified corporate claims until a third-party audit or raw transcripts are published.
- Crunchbase rank as validation is brittle: Crunchbase signals (rank, heat score) can reflect attention or indexing artifacts. They are not, by themselves, evidence of product-market fit or scientific validation. Rapid rank movement is noteworthy, but it must be corroborated with usage metrics, paying customers, or independent case studies. ADMANITY’s narrative conflates signal momentum with technical validation.
- Ethical and regulatory risk: Building and commercializing tools specifically engineered to persuade raises ethical and potentially regulatory flags. Persuasion at scale—especially if it targets vulnerable groups—can easily stray into manipulative practices. Any company promising prescriptive emotional influence must clarify guardrails, transparency, and consent practices. This is not addressed in detail in the press material.
Monetization and compute economics — what’s at stake
ADMANITY frames the monetization opportunity as “the biggest layer” for AI: influence rather than compute. That is a provocative lens, and it contains two related propositions:
- Businesses will pay a premium for reliable, conversion-focused outputs that reduce iteration and increase sales lift. Given the size of the marketing spend market, that premium could be sizable if conversion improvements are real and repeatable. Industry benchmarks show conversion gains directly translate to revenue, so a reproducible uplift of even a few percentage points could justify subscription or revenue-share business models.
- Lower compute time equals lower cost. ADMANITY claims its protocol cuts time-per-output roughly in half and reduces iteration to zero, which would reduce per-request compute billing and shorten turnaround. That would improve economics for both LLM providers and downstream customers. However, any third-party middleware that adds inference steps, embeddings, or retrieval will also introduce its own compute demands; the net compute and cost effect needs empirical disclosure. ADMANITY’s press statement uses a benchmark of “8–10 seconds vs 4–5 seconds,” but again, the underlying test conditions and measurement methods are not public.
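A back-of-envelope model shows why the net compute effect needs empirical disclosure: halving generation time only lowers cost if the middleware's own overhead (retrieval, embeddings, extra inference calls) stays small. Every number below is invented for illustration; only the latency bands (8–10 seconds baseline vs 4–5 seconds with the protocol) come from the press statement.

```python
# Back-of-envelope cost model. All figures are illustrative assumptions;
# only the ~9s vs ~4.5s latency bands come from ADMANITY's press statement.
COST_PER_SECOND = 0.002   # assumed inference cost in dollars per second


def cost(seconds_per_draft: float, drafts: int, overhead_seconds: float = 0.0) -> float:
    """Total inference cost to produce one finished piece of copy."""
    return (seconds_per_draft * drafts + overhead_seconds) * COST_PER_SECOND


# Baseline: ~9s per draft, with 4 iterations before the copy is usable (assumed).
baseline = cost(seconds_per_draft=9.0, drafts=4)

# Protocol claim: ~4.5s per draft, zero iterations, but the middleware itself
# adds retrieval/embedding time (assumed 2s here).
protocol = cost(seconds_per_draft=4.5, drafts=1, overhead_seconds=2.0)
```

Under these assumed numbers the protocol path is cheaper, but the conclusion flips if middleware overhead grows large, which is exactly why the test conditions behind the "8–10 seconds vs 4–5 seconds" benchmark matter.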
Ethics, policy, and practical safeguards
Deploying tools that intentionally engineer emotional responses raises ethical questions that every responsible vendor must address:
- Transparency: Users and their customers should be explicitly informed when persuasive content is machine-generated and optimized to influence decisions. Clear labeling and audit trails are essential.
- Consent and targeting limits: Persuasive AI must not be weaponized against protected or vulnerable populations. Policies should restrict micro-targeting that exploits cognitive vulnerabilities.
- Auditability: Systems claiming conversion uplift must provide logs and testable claims for independent review. Reproducible A/B test design and privacy-respecting measurement are necessary.
- Regulatory compliance: Advertising and consumer-protection laws differ by jurisdiction. Vendors must ensure their tools do not violate deceptive practices statutes or ad attribution rules.
What WindowsForum readers (SMBs, marketers, IT leaders) should take away
- Treat the headline numbers with healthy skepticism. ADMANITY’s framing is bold and attention-grabbing, but many of the numerical claims are company-reported and lack published audits or raw transcripts. Use them as signals, not settled facts.
- Recognize a real technical gap. If your workflows depend on persuasive copy that converts, you will likely need frameworks—prompt structures, persona templates, and A/B experimentation—to turn generative outputs into reliable assets. The space is ripe for practical solutions, whether from ADMANITY or others.
- Demand empirical evidence. For any vendor claiming conversion lift: ask for the experimental design, sample sizes, retention/churn definitions, and raw comparative metrics. A vendor that cannot provide these should be treated cautiously.
- Prepare governance. If you adopt persuasion-optimized AI, build a governance checklist: transparency to customers, human-in-the-loop approval, ethical targeting rules, and legal review for advertising compliance.
- Optimize internally first. Many organizations underutilize existing marketing data and optimization practices. Before paying for a third-party emotional layer, ensure A/B testing, landing page optimization, and analytics practices are mature.
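When demanding empirical evidence, a small pilot can be checked with a standard two-proportion z-test on your own A/B counts. The sketch below uses only the Python standard library; the example counts (300 vs 360 conversions out of 10,000 visitors each) are invented for illustration, not vendor data.

```python
import math


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    Returns (absolute uplift, z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf gives the two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value


# Hypothetical pilot: control converts 300/10,000 (3.0%),
# vendor-optimized variant converts 360/10,000 (3.6%).
uplift, z, p = two_proportion_z_test(300, 10_000, 360, 10_000)
```

At these sample sizes a 0.6-point absolute uplift is statistically significant; at a few hundred visitors per arm the same uplift would not be, which is why sample sizes and definitions belong in any vendor's evidence package.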
The competitive landscape and likely near-term outcomes
- Large platform owners (OpenAI, Google, Microsoft, Anthropic, xAI) will likely respond in three ways:
- Improve prompt/agent tooling and provide stronger opinionated templates for persuasion tasks.
- Expand support for plug-in/middleware ecosystems so external emotional frameworks can be integrated at scale.
- Promote best-practice frameworks and guardrails to limit manipulation risk.
- Specialist vendors like ADMANITY will either:
- Win strategic partnerships with larger platforms or martech stacks if they can demonstrate replicable lift and compliance; or
- Remain niche providers selling direct-to-SMB solutions and templates if larger platforms internalize similar capabilities.
- Agencies and creative firms will evolve: some will resell persuasion architectures; others will double down on human creative-to-AI workflows where brand nuance and ethical oversight remain essential.
Conclusion — what’s credible, what’s not, and what to watch next
ADMANITY’s announcement articulates a plausible and important market insight: businesses need reliably persuasive AI outputs, and off-the-shelf LLMs trained for generality produce variable results for emotional persuasion. Industry adoption trends and the probabilistic nature of LLMs support the core premise. However, the press release’s most consequential numeric claims—platform “admissions,” 35–50% persuasion-share estimates, 91% churn, and precise Crunchbase rank anomalies—are company-reported and lack the public, auditable evidence that would make them independently verifiable. Treat these as claims that require further disclosure: transcripts of model interactions, A/B test designs, datasets, and third-party audits. Until those appear, the marketplace should respond with cautious interest rather than wholesale adoption.
For businesses and IT leaders: prioritize measurement. If a vendor promises conversion improvements, insist on transparent methodology and small-scale pilot A/B experiments that you can validate with your own KPIs. If an emotional persuasion layer can be proven to reduce iteration, lower compute costs, and increase revenue without ethical compromise, it will become a standard part of martech stacks. Until that proof exists in the public domain, ADMANITY’s story is an attention-grabbing hypothesis backed by platform-consistent technical logic—but not yet an independently verified industry truth.
Key metrics and claims in this article were checked against available technical literature, industry adoption research, and the ADMANITY press distribution. Several critical numeric claims remain dependent on company-supplied data; these have been flagged as requiring independent verification.
Source: The Globe and Mail AI Platforms Are Having Problems Giving Business Customers Persuasion-Oriented Solutions Says Brian Gregory, of ADMANITY – Creator of The ADMANITY Protocol and PRIMAL AI.