Microsoft’s Copilot produced a wildly entertaining — and instructive — first-round mock of the 2026 NFL Draft after Week 1, exposing both the speed and the limits of conversational AI when it tries to translate fuzzy, fast-moving sports data into roster decisions.
Background
USA TODAY’s experiment asked Microsoft Copilot a deceptively simple question for each pick: “With the __ pick in the 2026 NFL Draft, who will the [TEAM NAME] select?” The result was an AI-driven, single-pass first-round mock that reads like a mashup of fandom, statistical priors, and the occasional hallucination. The published mock included dozens of eyebrow-raising choices, including duplicate selections, improbable positional priorities, and candidates omitted despite widely discussed pre-draft standing, which together make the piece an unexpectedly useful case study in how large language models reason about sports scenarios.
USA TODAY’s broader Copilot project (which also produced Week 1 and Week 2 game predictions) shows the newsroom workflow needed to operate these systems: repeatable prompts, human verification steps, and selective re-prompting when the assistant relies on stale roster data. That editorial choreography is as important as the output itself.
Overview: What Copilot picked — and why it matters
The AI mock reads like a highlight reel of why automated sports forecasts are both intriguing and treacherous. Highlights and notable quirks include:
- Multiple quarterbacks, including names like Garrett Nussmeier, Cade Klubnik, and Arch Manning, appear repeatedly across the board, sometimes landing in places that contradict likely team needs or existing roster investments.
- Unusual positional choices at early slots (for example, a linebacker projected in the top 10 to Baltimore) and repeated picks of the same prospects for different teams suggest the model leaned on a set of heuristics rather than a coherent, draft-market-aware strategy.
- Several high-profile omissions and surprising lower-round placements (e.g., Caleb Downs not appearing early) illustrate that the model’s priors and retrieval context were incomplete or misweighted.
How Copilot “thinks” about drafts: observable heuristics
Copilot’s decisions in the mock reflect a handful of clear heuristics that surface repeatedly in AI-assisted sports outputs:
- Favor experienced and proven quarterback archetypes (Copilot repeatedly prioritizes quarterback pedigree). This aligns with human draft logic because QBs are high-leverage assets, and prior QB success is an often-used predictor.
- Reward positional scarcity and perceived need without full context. The system will load up on pass rushers and offensive tackles if those classes are strongly represented in its retrieval buffer.
- Reuse and repeat high-confidence names when the model lacks a mechanism to model scarcity across teams. That leads to duplicated picks or the same player appearing on multiple boards.
- Default to prototypical picks and round numbers rather than calibrated probability distributions — a single deterministic pick or score is the model’s default output unless otherwise prompted.
Notable selections and what they reveal
The mock’s most newsworthy elements are the concrete selections that contradict draft-market logic. Several examples illustrate deeper failure modes and human editorial lessons.
Duplicate and improbable selections
- Arch Manning listed for both New England and Miami; Peter Woods and Jermod McCoy appear multiple times. These duplicates signal that Copilot did not incorporate a dynamic live market constraint (if one team takes a player, the player should no longer be available). This is an inherent problem when an assistant answers per-team prompts independently without transactional state.
Positional oddities at premium picks
- A linebacker projected to Baltimore in the top 10, and an inside linebacker toward the early end of the round, are examples where team context matters: Baltimore picking in the top 10 would imply a catastrophic season, and selecting a prospect projected for later rounds would reflect either a stale grade or a misunderstanding of team draft strategy. This emphasizes the need for models to incorporate organizational context and roster-construction logic.
Quarterback valuations out of sync with roster realities
- Teams that recently added QBs — or that have stable, proven starters — still took quarterbacks in Copilot’s mock. The model’s strong priors in favor of QB value overwhelmed its ability to reconcile those priors with short-term roster configurations and historical investments. That suggests the model relies heavily on positional value heuristics rather than a model of each team’s actual roster and cap situation.
Technical reasons for the mock’s oddities
Understanding why Copilot produced these outputs is the key to fixing them. The assistant’s behavior stems from multiple technical realities:
- Retrieval latency and stale data: conversational assistants often combine a reasoning core with retrieval layers that index news, databases, and in-house content. If week-of roster changes or draft evaluations are absent from the retrieval index, the model will make predictions based on older priors. USA TODAY’s workflow explicitly re-prompted Copilot when roster errors appeared, a necessary editorial step.
- Lack of market modeling: the assistant answered discrete, per-team prompts that did not enforce a draft-market constraint (no “player already drafted” state). The result: duplicate picks and unrealistic availability. This is not a hallucination so much as a mis-specified task.
- Heuristic-heavy scoring: Copilot frequently returned prototypical outputs (for example, winners scoring in the mid-to-high 20s in game predictions). Similarly, in draft mode, the assistant defaults to familiar names that fit positional heuristics rather than simulating a full draft process.
- Deterministic vs. probabilistic framing: by default Copilot returns single recommended outcomes, not distributions. Drafts are inherently probabilistic — grade bands, NFL team fit, and scheme compatibility matter — but Copilot’s single-answer style masks that uncertainty.
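The missing market modeling described above is straightforward to illustrate. Below is a minimal sketch, with entirely hypothetical player and team names, of a transactional draft session: because all picks flow through one shared pool, a player taken by one team can never be selected again by another, eliminating the duplicate-pick failure mode.

```python
# Minimal sketch of a draft-market state engine (hypothetical names).
# A single shared pool removes each player as he is picked, so the
# duplication produced by independent per-team prompts cannot occur.

class DraftSession:
    def __init__(self, available_players):
        self.available = set(available_players)
        self.board = []  # ordered (team, player) selections

    def make_pick(self, team, ranked_preferences):
        """Take the team's highest-ranked player who is still available."""
        for player in ranked_preferences:
            if player in self.available:
                self.available.remove(player)
                self.board.append((team, player))
                return player
        raise ValueError(f"No preferred player available for {team}")

# Two teams with overlapping boards: the second cannot repeat the first's pick.
session = DraftSession({"QB A", "QB B", "EDGE C"})
first = session.make_pick("Team 1", ["QB A", "EDGE C"])
second = session.make_pick("Team 2", ["QB A", "QB B"])  # QB A is already gone
```

In this framing, duplicates are not a hallucination to be patched after the fact but a constraint violation the session makes impossible by construction.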
Editorial and product recommendations: how to use AI for draft coverage
If newsrooms, fan sites, or teams want to use AI responsibly to produce mock drafts or draft-day assistance, here is a pragmatic playbook that addresses the weaknesses demonstrated by Copilot’s mock:
- Enforce a draft-market state engine
- Run a single session that tracks selections and removes prospects from the available pool as picks are emitted.
- Integrate live, authoritative data feeds
- Ingest team depth charts, official injury reports, combine measurables, and up-to-the-minute beat reporting to reduce stale inputs.
- Ask for distributions, not single picks
- Prompt the model for probability bands (e.g., “What is the likelihood this player goes in the top 10?”) or provide best/worst/most-likely scenarios.
- Human-in-the-loop verification
- Require that any roster, injury, or contract claim be validated against an independent team or league release before publication.
- Disclose model identity and data cutoff
- Always publish the assistant’s data cutoff timestamp and whether human edits were applied.
- Use ensembling and constrained sampling
- Combine multiple runs or models (historic mock drafts, analytics models, and human scouts) to average out stylistic biases.
- Add provenance metadata
- Track prompt variants, retrieval snapshots, and editorial corrections so each pick is auditable.
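Two of the playbook items, asking for distributions rather than single picks and ensembling multiple runs, can be combined in one small sketch (player names hypothetical): repeat the per-pick prompt several times and convert pick frequencies into probability bands.

```python
from collections import Counter

def pick_distribution(runs):
    """Convert repeated mock-draft runs for one pick slot into probability bands."""
    counts = Counter(runs)
    total = len(runs)
    # Fraction of runs in which each player was selected, most frequent first.
    return {player: round(n / total, 2) for player, n in counts.most_common()}

# Five independent runs for the same pick slot (hypothetical outputs):
runs = ["QB A", "QB A", "OT B", "QB A", "EDGE C"]
print(pick_distribution(runs))  # QB A leads with probability 0.6
```

Published this way, a pick reads as “QB A, 60% likely, with OT B and EDGE C as live alternatives,” which communicates the uncertainty a single deterministic answer hides.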
Strengths of AI-assisted mock drafts
Despite the failings, Copilot’s mock contains useful virtues that editors and teams can exploit:
- Speed and scale: an assistant can generate a full first round in seconds, enabling rapid scenario experiments (best-case/worst-case boards).
- Explainability on demand: conversational models can provide instant rationales (“why this pick?”) which editors can reuse as copy or fact-checking prompts.
- Pattern surface: AI often surfaces conventional wisdom quickly (positional value, perceived team needs), which is valuable for framing human analysis.
- Iterative refinement: as USA TODAY showed, Copilot can revise picks once given corrected facts — a capability that supports editorial workflows when properly managed.
Risks and potential harms
Using AI for mock drafts, and publishing those mocks, carries non-trivial risks that publishers must manage.
- Market amplification and feedback loops: widely distributed AI picks can influence betting lines and fan expectations. If large audiences treat deterministic picks as authoritative, the resulting market reactions can create self-fulfilling distortions.
- Reputation risk from factual errors: publishing a pick that relies on an outdated injury report or misstates a player’s eligibility can harm trust and lead to legal exposure.
- Overconfidence: deterministic AI outputs give a false impression of certainty where there should be humility. Readers need probability bands and editorial context.
- Governance and vendor lock-in: teams or leagues embedding a single vendor’s assistant into scouting or sideline workflows must plan for auditability, data governance, and human oversight.
Cross-checks and verifications: where Copilot got close — and where it needed help
USA TODAY’s larger Copilot experiments included verification steps for game predictions: checking QB Week 1 histories, roster changes, and injury reports against independent databases. The newsroom flagged several checks as high-confidence verifications; for example, Patrick Mahomes’ Week 1 career totals were validated via a statistical aggregator, and Chargers roster moves were confirmed through team and league reporting. Those validation steps are the practical backbone of safe AI usage in sports journalism.
One specific number referenced in the broader experiment, that home teams went 13–5 in Thursday games during the 2024 season, was flagged internally as plausible but not immediately verifiable without a granular, play-by-play aggregation. That careful caveat is exactly the right editorial posture: report interesting model outputs, but flag numbers that need independent confirmation.
When an assistant’s pick depends on high-leverage facts (injuries, suspensions, trades), the editorial rule should be to withhold publication until an independent, primary-source confirmation exists.
Practical example: a revised, safer AI mock-draft workflow
Below is a sequential workflow for a newsroom or fan site to produce an AI-assisted first-round mock that’s defensible and useful.
- Pull official team rosters, NFL injury reports, combine data, and latest beat reporting into a canonical ingestion pipeline.
- Initialize a single draft session that enforces pick availability (no duplicates).
- Prompt the assistant for each pick with the team’s roster context and ask for:
- Top three players the team is likely to pick
- A graded probability for each (e.g., 45%, 30%, 25%)
- A short rationale citing the three strongest factors
- Collect the assistant’s responses and run an automated fact-checker to flag any claims requiring human confirmation.
- Have two editors validate flagged items against team releases or beat reporting.
- Publish the mock with:
- Data-cutoff timestamp
- A short explanation of methodology (how the model was prompted, whether picks were constrained)
- Probability bands for each selection
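The middle of that workflow (constrained session, probabilistic prompting, and automated flagging) can be sketched as a single loop. All names here are hypothetical: `query_model` stands in for the real assistant call and `needs_verification` for the automated fact-checker.

```python
# Sketch of the constrained, probabilistic mock-draft workflow.
# `query_model` and `needs_verification` are hypothetical stand-ins.

def run_mock(draft_order, available, query_model, needs_verification):
    pool = set(available)
    results, flagged = [], []
    for slot, team in enumerate(draft_order, start=1):
        # Ask for ranked candidates with probabilities and rationales.
        candidates = query_model(team=team, pool=sorted(pool), slot=slot)
        # Enforce pick availability: drop already-drafted players.
        candidates = [c for c in candidates if c["player"] in pool]
        pick = max(candidates, key=lambda c: c["prob"])
        pool.remove(pick["player"])
        results.append({"slot": slot, "team": team, **pick})
        # Route claims needing human confirmation to editors.
        if needs_verification(pick["rationale"]):
            flagged.append((slot, team, pick["rationale"]))
    return results, flagged

# Stub model for illustration: prefers the first available players.
def stub_model(team, pool, slot):
    return [{"player": p, "prob": round(0.6 - 0.1 * i, 2), "rationale": "fits need"}
            for i, p in enumerate(pool[:3])]

results, flagged = run_mock(["Team 1", "Team 2"],
                            ["Player A", "Player B", "Player C"],
                            stub_model, lambda text: "injury" in text)
```

Everything the loop emits (the candidate lists, probabilities, and flagged rationales) doubles as the provenance metadata the playbook calls for, since each published pick can be traced back to a specific prompt and pool state.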
What this means for fans, bettors, and teams
- Fans: AI-assisted mock drafts make for entertaining, rapid-fire content that quickly surfaces both long lists and short lists of candidates. Treat them as conversation starters.
- Bettors: Do not treat single deterministic AI picks as betting advice. Convert AI outputs into probabilistic forecasts, then triangulate with market odds and official reports.
- Teams and scouts: AI can accelerate scouting and scenario planning, but any operational use on draft day requires auditable provenance and human-in-the-loop checks.
Conclusion
USA TODAY’s Copilot-powered mock draft is an instructive experiment: it demonstrates the strengths of conversational assistants (rapid scenario generation, transparent rationales, and the ability to surface conventional football heuristics) while also illuminating the design and editorial gaps that produce comical or misleading outputs. The duplicates, positional oddities, and roster misreads are not evidence that AI “can’t” do this work; they are evidence that current models and workflows require market modeling, fresh authoritative inputs, and probabilistic framing to be reliable in a zero-sum system like the NFL Draft.
AI is ready to be a powerful assistant to draft analysts, not a replacement. When publishers and teams adopt the practical safeguards outlined above (enforce transactional draft state, integrate live feeds, present distributions rather than single picks, and maintain human oversight), Copilot-style tools can accelerate coverage and enrich fan engagement without sacrificing accuracy or editorial integrity. Until those safeguards are standard, Copilot mock drafts will remain fascinating and fun, but they should be read as imaginative scenarios, not definitive forecasts.
Source: USA Today NFL mock draft 2026: Copilot AI predicts the entire first round after Week 1 results