Sandvik’s December Promptathon proved that the future of frontline productivity won’t be written in code alone, but in the questions people ask their AI tools—and the company just ran a live experiment to teach its workforce how to ask better ones.
Background
Sandvik staged its inaugural global Promptathon in December as a short, focused exercise designed to teach employees practical prompt skills for Microsoft Copilot and to surface ideas that could be scaled across the business. Teams worldwide had two hours to use Copilot to solve real business problems—ranging from efficiency improvements to reimagined onboarding—then pitched their concepts to judges who scored for strategic alignment, tangible value, originality and scalability. The event opened with a masterclass from Usman Afzal, a Microsoft Solutions Architect and productivity coach, who presented a practical “four‑ingredient” prompting framework to keep AI outputs grounded in business needs. A Brazilian team won with “Sandvik Game Master,” a gamified onboarding and training concept aimed at speeding competence building while reinforcing safety culture—an outcome Sandvik’s CEO, Stefan Widing, publicly praised as “bold” and “human‑centered.” This single, concise program is an example of a broader shift: corporations are moving from technology pilots toward people‑centered education in AI literacy so the workforce can extract real value from in‑app assistants like Microsoft 365 Copilot. Microsoft’s Copilot strategy is explicitly built on embedding LLM‑based assistance inside Word, Excel, PowerPoint, Outlook and Teams—and the vendor now offers structured upskilling (Copilot Academy) for employees to learn those skills.
Why a Promptathon matters: the skill at the center of AI productivity
Prompting as a workplace literacy
As corporate AI moves from isolated, IT‑led automation to everyday assistants embedded in apps, the dominant bottleneck to impact shifts from hardware and models to human‑AI communication. A well‑constructed prompt is a compact brief: it tells the model who the audience is, what the desired output format is, what constraints exist, and what evidence should be returned. Short investments in prompt literacy can multiply the quality and repeatability of AI outputs across thousands of tasks. Sandvik’s Promptathon is a practical manifestation of that idea.
The business logic
There are three economic reasons organizations are running events like this:
- Rapid scale of access: Copilot and similar assistants are now part of mainstream productivity apps, so a large fraction of knowledge workers will interact with them daily.
- Skill asymmetry: Employees who master prompt framing get outsized productivity gains because AI tools magnify the effect of small, high‑quality inputs.
- Low deployment friction: Prompting experiments are low‑cost pilots—no new code base or heavy engineering required—so companies can test, learn, and scale successful patterns quickly.
What Sandvik delivered: structure, winners, and the “four ingredients”
Event design that maps to outcomes
Sandvik’s format was intentionally short (two hours) to force focus and avoid “over‑design.” Teams prepared prompt sequences in Copilot, iterated quickly, and presented concise business cases to judges who evaluated four criteria: strategic alignment, tangible value, originality, and scalability. This scoring rubric is pragmatic: it privileges ideas that can be tested, measured and expanded rather than nice one‑offs that never leave the slide deck.
The four‑ingredient prompting formula
Although Sandvik’s writeup does not publish every detail of Usman Afzal’s masterclass, it highlights that structured prompts with modular components produce more reliable outputs. In practice, this “four‑ingredient” approach mirrors emerging industry playbooks:
- Role framing: assign the model a persona (e.g., “act as a safety training designer”).
- Context: provide the underlying doc, dataset, or example content to ground answers.
- Constraints & format: specify length, output structure, or explicit checks (tables, step lists).
- Verification requests: ask the model to list sources, highlight assumptions or return a “things to fact‑check” list.
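To make the four ingredients concrete, they can be composed into a reusable template that teams iterate on one field at a time. This is a minimal Python sketch; the class and field names are illustrative assumptions, not part of Sandvik’s or Microsoft’s published framework.

```python
from dataclasses import dataclass

@dataclass
class FourIngredientPrompt:
    # Field names are illustrative; the article does not publish the exact framework.
    role: str          # persona the model should adopt
    context: str       # grounding material: a doc excerpt, dataset, or example
    constraints: str   # length, output structure, explicit checks
    verification: str  # sources, assumptions, a "things to fact-check" list

    def render(self) -> str:
        # Assemble the ingredients into one prompt, one block per ingredient.
        return "\n\n".join([
            f"Act as {self.role}.",
            f"Context:\n{self.context}",
            f"Constraints: {self.constraints}",
            f"Before finishing: {self.verification}",
        ])

prompt = FourIngredientPrompt(
    role="a safety training designer",
    context="Excerpt from the current onboarding manual goes here.",
    constraints="Return a numbered list of at most 10 training steps.",
    verification="List your assumptions and a short 'things to fact-check' section.",
)
```

The point of the structure is repeatability: a team can tighten one ingredient (say, the constraints) while holding the others stable, which is what turns a lucky prompt into a reusable template.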
The winning idea: Sandvik Game Master
The Brazilian team’s “Sandvik Game Master” reframes onboarding and training as a gamified series of “worlds” where complex processes are taught through progressive challenges. The goal is faster competency, reinforced psychological safety, and peer collaboration—an explicitly human‑centric design that views AI as an accelerator of learning rather than a replacement for supervision. Sandvik’s CEO framed the idea as a practical innovation that empowers teams rather than displacing them.
What this means for IT, HR and leaders: immediate operational implications
1. Training, not just tools
Deploying Copilot or any enterprise assistant without an upskilling plan will lead to inconsistent results and “workslop”—AI‑polished artifacts that still require substantial human revision. Practical programs combine short masterclasses, reusable prompt templates, and a central repository of validated prompts. Sandvik’s Promptathon paired a Microsoft masterclass with a competition to create templates that can be reused—an efficient way to both teach and harvest repeatable assets.
2. Governance baked into rollout
The event—and the playbook that follows it—is no substitute for governance. Modern AI deployments demand documented templates, logging, and human‑in‑the‑loop gates. Logging should include the prompt template ID, model variant, context snapshot, user identifier, the raw outputs and human edits—data that’s required for debugging, compliance and measuring ROI over time. These operational controls are already standard advice from enterprise practitioners.
3. Measurable KPIs
Pilot projects must track useful metrics: time saved per task, human edits per output, number of hallucinatory claims caught, adoption rate and downstream business outcomes (e.g., new hire time‑to‑competence in training pilots). Without these, AI experiments remain anecdotal. Sandvik’s scoring (value, strategic alignment, scalability) helps shift attention to outcomes that can be measured or A/B tested.
Broader technical and organizational context
Copilot in the enterprise ecosystem
Microsoft has embedded Copilot across Microsoft 365 apps to leverage the Microsoft Graph and internal context within documents, chats and meetings. That integration makes Copilot uniquely positioned to perform synthesis tasks (meeting summaries, cross‑document briefs and data explorations) when prompts are well formed. Microsoft has also invested in formalized user training (Copilot Academy) to accelerate proficiency across organizations. These vendor moves explain why corporations pair tool rollouts with human training programs.
Why role‑based prompts and templates are evolving into IP
Teams are beginning to version prompts like code: template ID, owner, acceptable inputs, expected output format and risk level. This makes prompts reusable assets and creates a governance surface where organizations can control model access and provenance. For regulated workflows (finance, legal, audits), requiring explicit source mapping and a “things to check” output is becoming mandatory. The practical playbook for prompt governance includes templates, human verification steps and an audit log—best practices that practitioners are already documenting.
Risks and blind spots: what Promptathons don’t eliminate
Hallucinations and false certainty
Even with good prompts, models can generate plausible but incorrect statements. The cure is not perfect prompting alone; it’s a combination of grounding (upload source documents or use retrieval‑augmented prompts), explicit verification steps and human sign‑off for anything that could cause reputational or legal harm. This is particularly important for safety‑critical or compliance‑sensitive output.
Workslop and downstream friction
A known failure mode is “generate‑and‑send”: users generate content, polish it lightly, and forward it without proper verification. That creates more downstream work and erodes trust in AI outputs. Promptathons teach skills, but organizations must also change incentives and quality gates so AI becomes an aid, not a shortcut for skipping review.
Shadow AI adoption
If only pockets of the organization get training and templates, other groups may adopt third‑party AI tools unsupervised—creating data leakage and governance blind spots. Central teams must pair empowerment with guardrails: access controls, DLP policies and clear rules about uploading internal data into third‑party services. Microsoft’s enterprise controls for Copilot are part of that solution, but corporate policy and monitoring are required to enforce them.
Practical playbook: how to make a Promptathon pay off
1. Define the objective and guard rails before the event
   - Pick 2–3 workflows where demonstrable time savings or quality improvements are plausible (e.g., onboarding, meeting prep, routine reporting).
   - Define acceptable data use, DLP constraints and the approval process for publishing any agent or template.
2. Run a short, structured event that teaches and harvests simultaneously
   - Combine a 30–45 minute masterclass on the prompting framework with a focused build-and-pitch session (two-hour windows work).
   - Require teams to submit a prompt template, a short rationale for value, and a minimal pilot plan.
3. Score for deployability and scale, not novelty
   - Evaluate submissions for measurable impact, ease of scaling, and risk profile. Prioritize ideas that can be piloted with a named owner and measurable KPIs.
4. Version, log and publish a central prompt registry
   - Make every validated prompt a controlled asset with a template ID, owner, acceptable input types, model choice and an associated risk level.
5. Close the loop with measurement and iteration
   - Run pilots, measure time saved and human edits, and iterate both prompt wording and human verification checks. Incentivize prompt re‑use with internal rewards and recognition.
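The registry and audit-log ideas above can be sketched in a few lines. This is a minimal Python illustration; every class, field and value here is an assumption built from the fields the article recommends (template ID, owner, model variant, context snapshot, user identifier, raw output, human edits), not a published Sandvik or Microsoft schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptTemplate:
    # A validated prompt treated as a controlled, versioned asset.
    template_id: str                  # e.g. "onboarding-brief" (illustrative)
    owner: str                        # named owner accountable for the pilot
    version: int
    accepted_inputs: tuple[str, ...]  # input types the template may be grounded on
    model: str                        # model variant the template was validated against
    risk_level: str                   # e.g. "draft only", "requires human fact-check"

@dataclass
class PromptInvocation:
    # One audit-log record per use, mirroring the fields the article says to log.
    template_id: str
    model: str
    user_id: str
    context_snapshot: str             # what the model was grounded on
    raw_output: str
    human_edits: str = ""             # heavy downstream editing flags a weak template

registry: dict[str, PromptTemplate] = {}

def register(template: PromptTemplate) -> None:
    # Version prompts like code: a given id+version pair is immutable.
    key = f"{template.template_id}@{template.version}"
    if key in registry:
        raise ValueError(f"{key} already registered; bump the version instead")
    registry[key] = template

register(PromptTemplate(
    template_id="onboarding-brief",
    owner="hr-learning-team",
    version=1,
    accepted_inputs=("onboarding manual", "role profile"),
    model="copilot-m365",
    risk_level="requires human fact-check",
))
```

Logging each invocation against a template version is what later lets a pilot owner measure human edits per output and decide whether to iterate the wording or retire the template.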
How leaders should think about scale and talent
- Treat prompt literacy as core professional development: include short micro‑courses in onboarding and ongoing L&D. Microsoft’s Copilot Academy model is an example of vendor‑aligned, company‑specific training that can be adopted or adapted.
- Build an “AI champions” network: small cross‑functional groups that curate templates, monitor quality and surface high‑value templates to the organization.
- Create a clear taxonomy of risk: label templates by the level of proof required—draft only, requires human fact‑check, or ready for publish—then enforce those levels through workflow controls.
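One way to enforce such a taxonomy in workflow controls is an ordered scale, where output is released only when the verification actually performed meets the level the template demands. A minimal sketch, assuming the three proof levels named above (the enum and function names are illustrative):

```python
from enum import IntEnum

class ProofLevel(IntEnum):
    # Ordered: a higher value means more verification has been performed.
    DRAFT_ONLY = 0
    HUMAN_FACT_CHECKED = 1
    READY_FOR_PUBLISH = 2

def can_release(required: ProofLevel, performed: ProofLevel) -> bool:
    # Workflow gate: output may go out only when the verification performed
    # meets or exceeds what the template's risk label requires.
    return performed >= required

# A draft-only template can circulate without extra checks...
assert can_release(ProofLevel.DRAFT_ONLY, ProofLevel.DRAFT_ONLY)
# ...but publish-grade output with no full fact-check is blocked.
assert not can_release(ProofLevel.READY_FOR_PUBLISH, ProofLevel.HUMAN_FACT_CHECKED)
```

The value of an ordered scale is that the gate is a single comparison, which makes it easy to wire into an approval workflow rather than relying on reviewers remembering each template's label.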
The strategic upside: what organizations actually gain
When done well, prompt literacy programs unlock three kinds of competitive advantage:
- Tactical speed: routine cognitive chores (summarising, first drafts, data triage) are completed faster, giving people time for higher‑value analysis and judgment.
- Consistency at scale: repeatable templates reduce variance and make outputs auditable, improving quality and enabling automation where appropriate.
- Talent leverage: employees who can frame problems well become amplifiers—one skilled prompt writer can produce templates that scale across teams and geographies.
Critical assessment: what Sandvik did well — and where more caution is needed
Strengths
- Practical, human‑centered design: focusing on onboarding and training recognizes that AI’s first big win is often learning and knowledge transfer, not advanced analytics. The winning idea is deliberately low‑risk and potentially high‑impact.
- Vendor alignment: partnering with Microsoft for a masterclass is efficient; Copilot is already integrated into the apps employees use, making adoption friction low.
- Outcome orientation: judging ideas for scale and tangible value steers the program away from novelty and toward deployable pilots.
Gaps and cautions
- Governance detail: the public write‑up does not disclose the governance or DLP controls used during the Promptathon. For any event that uses internal documents or employee data, clearly defined DLP and audit trails are essential. Without that transparency, an internal pilot could create compliance risk. This is a cautionary gap to fix before large‑scale rollouts.
- Measurement plan: short events produce enthusiasm, but converting ideas into measurable ROI requires a disciplined follow‑through: pilot owners, KPIs, and measurement windows. The next phase must invest in those capabilities.
Looking ahead: how Promptathons will evolve into operational programs
Expect the following developments in organizations that adopt this model:
- Prompt libraries as internal IP: central registries with versioning, owners and risk classification will become a normal part of IT assets.
- Human‑in‑the‑loop pipelines: repeated prompt patterns will be automated into agents that run under supervision, handing difficult judgments to humans while automating routine synthesis. This is the natural evolution from templates to guarded automation.
- Integration into L&D and hiring: prompt skills will appear in job descriptions for knowledge work and in competency frameworks for performance evaluation, because the ability to frame problems for AI will be a durable multiplier for many roles.
Conclusion
Sandvik’s Promptathon is not just a feel‑good innovation contest; it’s a tactical, teachable approach to lifting the AI competency of a large, distributed workforce. By combining vendor‑led instruction, a clear four‑ingredient prompting framework, and a short, outcome‑centric contest format, Sandvik created both a learning moment and a pipeline of deployable ideas. The wider lesson for IT leaders is simple: tools matter, but skills matter more. Organizations that invest in prompt literacy, governance and measurable pilots will turn AI from a novelty into a repeatable productivity advantage. For teams planning similar initiatives, the practical checklist is immediate: pick a constrained workflow, run a short masterclass plus build sprint, require template metadata and owner assignment, log everything, and measure before scaling. The hard work is not inventing a great prompt; it’s turning that prompt into a trusted, versioned asset with an owner, a measurement plan, and human verification baked in.
The question for leaders is no longer whether AI will change work—but how quickly their people can learn to ask the right questions.
Source: Sandvik Group Inside the First Sandvik Promptathon: How AI Prompting Is Shaping the Future of Work