ADMANITY says five leading AI systems told the company that the single largest category of business queries — marketing and emotional persuasion — is not being handled reliably by current large language models. The startup claims its ADMANITY® Protocol and upcoming PRIMAL AI™ close that gap with a portable emotional persuasion layer that converts factual outputs into predictable, conversion‑oriented copy across multiple LLMs.
Background / Overview
ADMANITY, a Phoenix‑based startup led by CEO Brian Gregory, recently rolled out an aggressive PR campaign claiming a model‑agnostic persuasion technology that it says was validated in cross‑vendor "Toaster Test" experiments with OpenAI’s ChatGPT, xAI’s Grok, Google’s Gemini, Microsoft Copilot, and Anthropic’s Claude. The company frames the problem bluntly: businesses ask AI for persuasive marketing copy and routinely receive informative but emotionally flat text, producing low conversion and high churn. ADMANITY presents the ADMANITY® Protocol and a sequestered "Mother Algorithm" as a deterministic emotional sequence that LLMs can follow to produce persuasive outputs in one pass.
Two key administrative signals ADMANITY highlights are a filed trademark for PRIMAL AI and a rapidly rising Crunchbase profile with high Heat Scores and founder ranking movements — metrics it uses to demonstrate market traction and interest. The company also published a series of syndicated press releases that recount model interactions and claim significant compute and conversion improvements when PRIMAL AI instructions are applied.
These announcements land at the intersection of two broad industry realities: (1) LLMs are powerful pattern‑matching engines that excel at information synthesis but are not designed as native emotional persuasion engines; and (2) monetization at scale for AI platforms increasingly depends on delivering measurable business outcomes — such as higher conversion rates — not only correct facts. ADMANITY’s pitch: the first platform to offer reliable, auditable, repeatable persuasion that demonstrably lifts conversions will capture a substantial commercial moat.
What ADMANITY is claiming
The headline assertions
- ADMANITY says five major LLMs independently confirmed that roughly 35–50% of business queries concern marketing and persuasion tasks and that current systems lack systematic frameworks for emotional persuasion.
- The company reports that baseline LLM output is often "flat, logical, and factual" while its ADMANITY® Protocol can transform that same input into persuasive copy in a single pass (the "Toaster Test"), producing faster, iteration‑free outputs with a measurable conversion architecture.
- ADMANITY claims substantive operational and monetization benefits: reduced compute, halved generation time in tests, and potential multi‑billion dollar addressable markets if platforms monetize persuasion as a premium feature.
- The company also touts rapid Crunchbase rank movement and trademark filings as evidence of traction and imminent strategic acquisition interest.
What’s verifiable today — and what is not
- Verifiable: ADMANITY’s public filings and press distribution (trademark filing for PRIMAL AI and an active Crunchbase profile) are visible in the public record and have been widely syndicated. The company has published demonstration narratives (YES! TEST, Toaster Test) and distributed them to press outlets.
- Unverified: There is no public, vendor‑signed confirmation from OpenAI, Google, Microsoft, Anthropic or xAI that they formally endorsed or independently audited ADMANITY’s claims or Toaster Test methodology. The specific percentage breakdowns (e.g., 35–50% of business queries) and precise failure/churn statistics attributed to each platform appear to come from ADMANITY’s internal interactions and experimental transcripts rather than third‑party telemetry. Treat those headline figures as company‑originated until independent audits are published.
Why the core technical idea is plausible
LLM behavior and instruction sensitivity
Large language models are highly sensitive to prompt structure, context and exemplars. Engineering patterns that steer LLM outputs — prompt engineering, instruction tuning, prefix/prompt adapters, LoRA, middleware rewriting and reranking — are already established practices used by practitioners to modify tone, enforce brand voice, and optimize for task‑specific behaviors. Encoding a prescriptive emotional sequencing (e.g., problem recognition → resonance → social proof → urgency → CTA) into prompts or as an external adapter can change output behavior, often dramatically. This makes the concept of a persuasion "overlay" technically plausible.
Integration patterns and trade‑offs
- Prompt wrapper (fastest, vendor‑agnostic): cheap to deploy but token‑expensive and brittle across differing system prompts and safety overlays.
- Adapter/internalization (LoRA/prefix tuning): efficient at scale, persistent behavior across calls, but requires hosting access or vendor cooperation.
- Middleware/orchestration (post‑generation rewrite or ranking): works across closed APIs and is auditable, but adds latency and operational complexity.
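Of the three patterns, the prompt wrapper is the cheapest to prototype. A minimal sketch is shown below; the stage names and template are illustrative assumptions based on the generic sequence described above, not ADMANITY's actual protocol:

```python
# Illustrative prompt-wrapper: prepend a fixed emotional sequence to a
# base request so any chat-style LLM receives the same persuasion scaffold.
# The stage list is a hypothetical example, not ADMANITY's protocol.

PERSUASION_STAGES = [
    "problem recognition",
    "emotional resonance",
    "social proof",
    "urgency",
    "call to action",
]

def wrap_prompt(product: str, audience: str, base_request: str) -> str:
    """Build a vendor-agnostic instruction that forces staged output."""
    stage_lines = "\n".join(
        f"{i}. {stage}" for i, stage in enumerate(PERSUASION_STAGES, start=1)
    )
    return (
        f"You are writing marketing copy for {product}, aimed at {audience}.\n"
        f"Structure the copy so it moves through these stages in order:\n"
        f"{stage_lines}\n"
        f"Task: {base_request}"
    )

prompt = wrap_prompt("a smart toaster", "busy commuters",
                     "Write a 100-word product description.")
print(prompt)
```

The trade‑off noted above is visible in the sketch: every call carries the full scaffold as extra tokens, and a vendor's own system prompt or safety layer may dilute or override the staged instructions.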
Critical analysis: strengths, credibility, and evidence gaps
Notable strengths
- The pitch meets a real market pain: many SMBs and mid‑market buyers want measurable sales outcomes from AI — not only coherent descriptions. Automation that reliably increases conversion rates could deliver major ROI and become a premium platform capability. ADMANITY’s framing aligns with this commercial logic.
- The technical idea maps cleanly to known LLM controls. There is nothing conceptually impossible about codifying persuasion sequences and using them as guidance or adapters to bias outputs toward emotional resonance.
- Administrative signals (trademark filing, Crunchbase momentum) indicate market visibility and a go‑to‑market push that could attract partner interest if the product proves reproducible.
Major evidence gaps and credibility issues
- The most consequential assertions — that five major AI platforms independently confirmed the 35–50% persuasion‑query figure and agreed they lack native emotional persuasion frameworks — are presented in ADMANITY’s PR materials based on interactions the company controlled. There is no public, vendor‑signed telemetry or independent auditing that corroborates those platform‑level percentages or vendor endorsements. Those claims should be read as claims, not platform policy statements.
- Conversion uplift and latency improvements are reported in company demos, not in reproducible, third‑party randomized trials with published raw logs, model versions, prompt transcripts, and statistically powered A/B analyses. Extraordinary commercial claims (multi‑billion dollar monetization upside, guaranteed conversion rates) require extraordinary evidence — which is not yet publicly available.
- Crunchbase rank and Heat Score movements are interesting attention signals but are not technical validation. They can reflect PR traction rather than product efficacy; treat them as market interest metrics, not proof of universal effectiveness.
Independent corroboration where available
Independent technical literature and industry research confirm two crucial premises: (1) LLMs operate largely as statistical, pattern‑matching systems and can produce outputs that sound persuasive without a model of human emotion, and (2) affective computing and emotional AI remain nascent with serious challenges around dataset bias, cultural nuance and ethical constraints. These independent findings support the plausibility of ADMANITY’s thesis — that emotional persuasion is a distinct engineering problem — but they do not validate the company's specific numeric claims or vendor endorsements.
Business and monetization implications
Why platforms would pay for reliable persuasion
- Ad economics at scale: small conversion‑rate uplifts on massive advertising and commerce funnels translate into billions of dollars in incremental value for platform owners and merchants. ADMANITY frames the opportunity as a new monetization layer for LLMs, CRMs and martech stacks. The arithmetic is straightforward and compelling at scale.
- Productization paths: vendors could offer persuasion as a premium API add‑on, a "Copilot Revenue Edition", or outcome‑based pricing. Martech vendors and CRM providers could upsell this capability to SMEs lacking agency budgets.
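The scale arithmetic above is easy to make concrete. In the toy calculation below, every number is an illustrative assumption, not a figure reported by ADMANITY or any platform:

```python
# Toy conversion-uplift economics; every number here is an illustrative
# assumption, not a figure reported by ADMANITY or any platform.
monthly_sessions = 10_000_000      # traffic through a commerce funnel
baseline_cvr = 0.030               # 3.0% baseline conversion rate
uplift_pp = 0.003                  # +0.3 percentage-point uplift (10% relative)
avg_order_value = 80.0             # dollars per converted order

incremental_orders = monthly_sessions * uplift_pp
incremental_revenue = incremental_orders * avg_order_value

print(f"Incremental orders/month:  {incremental_orders:,.0f}")
print(f"Incremental revenue/month: ${incremental_revenue:,.0f}")
```

On these assumptions a single funnel gains roughly 30,000 orders and $2.4M per month, which is why even fractional, reproducible uplifts would be worth a premium to platform owners.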
Business risks and vendor calculus
- Build, partner, or block: platform owners face three choices — build an in‑house persuasion stack (control, high cost), partner/license (speed, vendor dependency), or block/limit features (avoid regulatory and reputational exposure). Any decision will hinge on reproducible evidence, safety audits, contractual protections and the vendor’s appetite for regulatory risk.
- Vendor dependency and antitrust risk: if a single third party becomes the gatekeeper to conversion performance, platforms and advertisers could face vendor lock‑in and antitrust scrutiny — a material strategic consideration.
Ethics, safety and regulatory perspective
Manipulation vs. persuasion
The line between legitimate persuasion (transparent, non‑exploitative) and unethical manipulation (covert, exploiting vulnerabilities) is legally and ethically salient. Recent academic and regulatory discussion increasingly treats manipulative automated systems as high‑risk. A persuasion layer that systematizes emotional nudging at scale invites scrutiny under consumer protection law, advertising standards and emerging AI regulation frameworks.
Regulatory landscape and enforcement trends
- EU AI Act: includes provisions that address manipulative systems and categorizes some forms of emotional targeting as prohibited or high‑risk; automated persuasion used to exploit vulnerabilities could trigger compliance obligations and audits under the Act.
- U.S. oversight: the Federal Trade Commission has signaled scrutiny of deceptive or manipulative chatbot and AI uses; the regulatory tide is moving toward enforcement on transparency, unfair practices and deceptive claims. Companies should expect enforcement consequences if persuasion features mislead consumers.
Practical governance requirements
Any vendor or platform integrating automated persuasion should implement these safeguards:
- Human‑in‑the‑loop approvals for high‑impact content and regulated domains.
- Auditable experiment logs (full prompts, model versions, token accounting, time‑stamped outputs) for contractual and regulatory review.
- Explicit redlines and non‑deployment zones (e.g., political messaging, health claims, offers to vulnerable groups) baked into both technology and contracts.
- Clear consumer disclosures where persuasive automation is used and opt‑out mechanisms where appropriate.
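The logging requirement above can be prototyped cheaply. A minimal sketch follows; the field names are assumptions about what an auditor would want to see, not a standard schema:

```python
# Minimal auditable generation record: enough metadata to replay or
# dispute a single output. Field names are illustrative, not a standard.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class GenerationRecord:
    prompt: str
    output: str
    model_version: str
    temperature: float
    token_count: int
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Content hash so stored logs can later be checked for tampering."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

rec = GenerationRecord(
    prompt="Write persuasive toaster copy.",
    output="Meet the toaster that respects your mornings...",
    model_version="example-model-2025-01",
    temperature=0.7,
    token_count=128,
)
print(rec.fingerprint())
```

Storing the fingerprint alongside (or separately from) each record gives reviewers a cheap tamper check; a production system would add signing and retention policies on top.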
How IT, marketing and procurement teams should evaluate ADMANITY (or any persuasion‑layer vendor)
- Define outcome metrics clearly: conversion rate, average order value, refund/complaint rates, and customer retention.
- Demand auditable evidence: raw prompts, model parameters (temperature, decoding), model versions, token counts and time‑stamped output logs under NDA or in a sandbox.
- Run randomized A/B tests with statistically powered sample sizes: baseline (current copy) vs. vendor‑augmented outputs vs. a high‑quality human‑written control. Publish the methodology and guard against p‑hacking.
- Insist on human approval gates and safety filters during rollout; define explicit redlines for vulnerable populations and regulated messages.
- Negotiate contractual protections: indemnities, non‑training clauses (so customer data and outputs are not used for model training), audit rights and SLAs tied to auditable metrics.
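"Statistically powered" has a concrete meaning here. The sketch below uses the standard normal‑approximation sample‑size formula for comparing two proportions; the baseline and target conversion rates are illustrative assumptions:

```python
# Required sample size per arm for a two-proportion A/B test, using the
# usual normal approximation. Baseline/target rates are illustrative.
import math

def n_per_arm(p_baseline: float, p_treatment: float) -> int:
    """Per-arm n for a two-sided 5% significance test at 80% power."""
    z_alpha, z_beta = 1.96, 0.84   # z-scores for alpha=0.05 (two-sided), power=0.80
    variance = (p_baseline * (1 - p_baseline)
                + p_treatment * (1 - p_treatment))
    effect = p_treatment - p_baseline
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Detecting a lift from 3.0% to 3.6% conversion (a 20% relative uplift):
print(n_per_arm(0.030, 0.036))
```

The answer is on the order of 14,000 sessions per arm just to detect a 20% relative lift at conventional thresholds, which is why short, underpowered pilots routinely produce unreproducible "uplift" claims.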
A phased 12‑week pilot could run as follows:
- Weeks 1–2: Select a representative funnel and capture baseline metrics.
- Weeks 3–6: Integrate vendor guidance via middleware or prompt wrapper; generate multiple variants and instrument tracking.
- Weeks 7–10: Run randomized experiments, collect downstream metrics and monitor safety signals.
- Weeks 11–12: Evaluate results, conduct legal and compliance reviews, and decide on scale and contract terms.
The ADMANITY narrative — measured verdict
ADMANITY’s central thesis — that current LLMs do not natively encode a formal, repeatable emotional persuasion framework and that a portable, auditable persuasion layer could unlock substantial commercial value — is credible at a conceptual level and anchored in known model dynamics and marketing science. The ADMANITY® Protocol maps onto well‑understood engineering patterns and addresses a genuine market pain: turning AI‑generated content into measurable revenue.
However, the company’s most headline‑grabbing assertions — that five leading AI platforms independently confirmed specific usage percentages and endorsed PRIMAL AI — rest on ADMANITY’s controlled interactions and press transcripts rather than on vendor‑signed confirmations, published audits or reproducible A/B test artifacts. Those company‑originated numbers and platform quotes should be treated with caution until independent verification (audited logs, third‑party labs, or vendor statements) appears in the public record.
What to watch next
- Independent replication: publication of raw prompts, model versions, and reproducible A/B test logs will move ADMANITY’s claims from interesting demo to production proof.
- Vendor responses: formal statements from OpenAI, Google, Microsoft, Anthropic or xAI either corroborating or distancing themselves from the Toaster Test will materially affect credibility and partnership prospects.
- Regulatory cues: concrete enforcement actions or clarifying guidance on automated persuasion from bodies like the FTC, EU regulators or national advertising authorities will shape acceptable product designs.
- Market adoption signals: pilots with verifiable, published outcomes in enterprise customers (with audit logs) will be the clearest indicator of product‑market fit and durable monetization.
Conclusion
There is a clear and credible opportunity at the intersection of emotional persuasion and LLM output control: businesses want not just coherent content but copy that reliably drives human action. ADMANITY has packaged a compelling narrative, administrative traction and demonstration artifacts around PRIMAL AI and the ADMANITY® Protocol. Technically, a persuasion overlay is plausible and maps to known instruction and adapter patterns; commercially, conversion uplift is an easy sell because of the enormous downstream value.
At the same time, the boldest claims in ADMANITY’s rollout — uniform vendor confirmations, specific platform telemetry, and guaranteed conversion uplifts — remain company‑originated and have not been independently audited or vendor‑endorsed in the public domain. Buyers, platform owners and investors should therefore proceed with rigorous pilots, auditable evidence requests, contractual protections and ethical guardrails before treating promotional metrics as production facts. The first vendor to produce reproducible, auditable, and governed persuasion outcomes at scale will indeed rewrite monetization plays for LLMs — but proving that to regulators, customers and independent labs will determine whether the claim is transformative or merely persuasive marketing.
Source: The Globe and Mail AI Platforms Are Having Problems Giving Business Customers Persuasion-Oriented Solutions Says Brian Gregory, of ADMANITY – Creator of The ADMANITY Protocol and PRIMAL AI.
