The AI era has a people problem: organisations can buy the smartest models and the fastest GPUs, but without new kinds of managers to translate, coach and protect the workforce, those investments will deliver far less value than expected and may damage morale and retention in the process.
Background / Overview
The technology headlines are unmistakable: enterprise AI is moving from pilot projects to platform bets, and the capital required to support that shift is measured in trillions. McKinsey’s modelling of data‑centre and compute demand finds that by 2030 global data‑centre investment could require roughly $6.7 trillion (with AI workloads accounting for the lion’s share). Public comments from major industry leaders echo the scale and urgency of that investment case — Nvidia’s leadership has repeatedly framed the build‑out as a multi‑trillion‑dollar shift in infrastructure demand. Yet money and compute are only half the story. The other half is human: employees who must adopt, experiment with, and responsibly operate these AI tools. Many organisations now talk about “superworkers” — employees who use AI to amplify their productivity and creativity — and with that promise comes a new imperative for people leadership. The emerging thesis is simple: to create superworkers at scale you need “supermanagers” — managers who combine human‑centred leadership with AI fluency, transparency practices and change‑management skills. This is not a small pivot in job descriptions; it is a management redefinition that will determine whether AI investments convert into measurable business outcomes.
Why managers — not machines — still matter
The myth of managerless AI adoption
There is a persistent myth that AI will simply eliminate middle management. In practice, organisations that try to scale AI without investing in managerial capability run into predictable problems: low adoption, shadow‑AI usage, and moral panic when people fear their jobs are at risk. Research and practitioner reporting show that when employees don’t understand how AI is being used — or believe decisions are being made by a “faceless algorithm” — trust collapses, experimentation halts and the technology is underused or misused.
Managers remain the human interface to change. The skills that defined effective managers in the pre‑AI era — coaching, listening, prioritisation, and psychological safety creation — are still essential. What changes is the mix of skills: managers must be able to explain what an AI tool does and does not do, interpret its outputs for a given business context, set guardrails, and create safe spaces for controlled experimentation. In short, they must become translators and stewards of responsible AI adoption.
What makes a “supermanager”
A supermanager is not simply a manager who knows a few prompts. They are a hybrid leader who:
- Combines AI fluency (understanding capabilities, risks, data inputs and limitations) with people leadership.
- Makes AI adoption transparent and accountable — explaining the why, the how, and the what‑if to their teams.
- Embeds continuous learning into day‑to‑day work: encouraging experimentation, sharing failures, and curating reusable prompts and templates.
- Uses AI as a productivity multiplier for the team — redesigning workflows so humans and models amplify one another’s strengths.
Evidence and real‑world examples: transparency, platforms and culture
Transparency as trust infrastructure
Transparency about AI deployments is the single most replicable lever leaders can use to reduce anxiety and encourage safe experimentation. Leading organisations are publishing AI principles, policies and practical guidance for employees that spell out what systems can — and cannot — do, what data they use, and what protections are in place.
Clifford Chance’s public rollout is a clear example: the firm combined an internal, private AI assistant with an enterprise deployment of Microsoft 365 Copilot and the Viva suite — and wrapped the rollout in a firm‑wide AI Principles and Policy framework that emphasises fairness, accountability and client consent for AI‑driven outputs. That kind of deliberate governance and user communication is central to building trust. Microsoft itself frames employee experience platforms (EXP) like Viva as experiential layers through which organisations can make AI visible, explain its uses, and create communities of practice for learning and sharing. These platforms allow organisations to publish guidance, host Copilot adoption communities, and measure usage — giving managers the signals they need to coach teams in real time. ASOS and other retail and consumer brands have also been cited in broader case studies for leveraging Viva and Azure tools to create internal communities and learning pathways that demystify AI for frontline teams — although the depth and public detail of those programmes vary by organisation.
What the evidence says about scale and outcomes
Independent industry modelling and reporting converge on one point: the infrastructure and operating investment required to scale AI in enterprise settings is enormous and will continue to grow. McKinsey’s 2030 scenarios show multiple plausible trajectories — from a constrained $3.7 trillion data‑centre capex outcome to $7.9 trillion in a high‑growth scenario — with a central case near $5.2–$6.7 trillion for AI‑capable infrastructure by 2030. That range underscores both the scale of the opportunity and the uncertainty of timing. Industry commentary and executive statements (including from chipmakers and hyperscalers) reinforce the message: large‑scale, sustained capital allocation is underway and will require not only hardware but also organisational change to capture value.
The human side: anxiety, adoption gaps and the trust deficit
A fragmented picture of employee sentiment
Surveys paint a mixed picture on how worried employees are about automation. Numbers vary by survey methodology, region and role. Some recent manager‑focused research finds that a majority of employees report worry about AI’s effect on their job security — figures in the 50–60% range appear in multiple 2024–2025 studies — while other studies find lower concern once workers are reskilled or see clear upskilling pathways. The difference often comes down to how questions are asked and who is surveyed. The upshot for leaders is simple: anxiety is real and material even if its exact magnitude fluctuates by study. Organisations cannot treat fear as an abstract statistic; they must address it through concrete, manager‑led practices that communicate intent, protect privacy, and offer practical upskilling pathways.
Listening, measuring and nudging in real time
Traditional annual engagement surveys are inadequate for AI adoption cycles that move at product‑release speed. The good news is that the same AI and analytics capabilities being deployed in production also make continuous listening possible: sentiment analysis, pulse surveys, and behaviour analytics can flag when teams are confused, fearful or experimenting safely.
Supermanagers use these signals to act proactively: clarifying use rules, running short demos, connecting teams to IT and security, and orchestrating role‑specific training. The equilibrium the organisation seeks is not data alone but data plus human judgement — a supervised loop in which managers interpret analytics and take culturally sensitive action.
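To make that supervised loop concrete, here is a minimal sketch in Python, assuming a hypothetical weekly pulse export (the team names, fields and thresholds are illustrative and do not refer to any specific vendor product). It aggregates responses per team and surfaces the ones a manager should follow up with in person; the analytics only raise a flag, while the judgement and the conversation stay with the manager.

```python
from collections import defaultdict
from statistics import mean

# Illustrative weekly pulse responses. In practice these rows would come from
# an export of whatever listening tool the organisation already uses; the
# field names and thresholds here are assumptions for this sketch.
responses = [
    {"team": "claims-ops", "sentiment": 2, "ai_concern": True},
    {"team": "claims-ops", "sentiment": 3, "ai_concern": True},
    {"team": "marketing",  "sentiment": 4, "ai_concern": False},
    {"team": "marketing",  "sentiment": 5, "ai_concern": False},
]

SENTIMENT_FLOOR = 3.0        # flag teams whose average mood dips below this
CONCERN_RATE_CEILING = 0.4   # flag teams where >40% of responses raise AI worries

by_team = defaultdict(list)
for r in responses:
    by_team[r["team"]].append(r)

for team, rows in by_team.items():
    avg_sentiment = mean(r["sentiment"] for r in rows)
    concern_rate = sum(r["ai_concern"] for r in rows) / len(rows)
    if avg_sentiment < SENTIMENT_FLOOR or concern_rate > CONCERN_RATE_CEILING:
        # The output is a prompt for a human conversation, not an automated action:
        # the manager reviews the signal, talks to the team, and decides what to do.
        print(f"{team}: avg sentiment {avg_sentiment:.1f}, "
              f"AI-concern rate {concern_rate:.0%} -> schedule a check-in")
```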
How to build supermanagers: a pragmatic playbook
1. Redefine manager competencies
Managers must master a new competency set that blends people leadership, AI literacy and ethical judgement. Practical steps:
- Add explicit AI‑fluency objectives to manager job descriptions and performance plans.
- Require completion of short, practical modules: Copilot basics, data handling and model limitations, and how to coach teams on prompt design and reuse.
- Create a curriculum for scenario‑based coaching: how to evaluate an AI output, when to escalate to legal or compliance, and how to co‑author work with a model.
2. Lead by example — manager adoption matters
Managers who publicly use AI tools and share both successes and failures create permission structures for their teams to experiment. Case studies show that early manager adoption helps normalise AI use for employees, reduces shadow‑use, and facilitates the sharing of reusable templates and workflows. Rewarding managers for documented team wins (impact + reuse) rather than raw usage counts will protect quality over quantity.
3. Make transparency operational
Transparency isn’t just a policy document: it’s a set of operational practices.
- Publish plain‑language guides showing what AI is used for, what data it sees, and where outputs must be reviewed.
- Instrument Copilot/Viva use cases with audit logs and explainability notes managers can read and discuss with teams (a generic example of such a record is sketched after this list).
- Build internal FAQ hubs and adoption communities where employees can ask questions and see adoption examples in context.
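As a companion to the audit-log point above, here is a minimal, hypothetical sketch of what a readable audit entry might contain. It is not a Copilot or Viva API; the record structure and field names are assumptions, meant only to show the kind of plain-language detail an explainability note can carry.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIUseRecord:
    """A plain, human-readable audit entry for one AI-assisted use case.

    Generic illustration, not a vendor API: the fields are the kind of thing
    a manager could review and discuss with the team, whatever tool actually
    produced the output.
    """
    use_case: str             # what the AI was used for
    tool: str                 # which approved tool was used
    data_classification: str  # e.g. "public", "internal", "client-confidential"
    human_reviewer: str       # who checked the output before it was used
    reviewed_on: date
    explanation_note: str     # plain-language note on what the model did and its limits

record = AIUseRecord(
    use_case="First-draft summary of a public regulatory update",
    tool="approved enterprise assistant",
    data_classification="public",
    human_reviewer="team lead",
    reviewed_on=date(2025, 6, 2),
    explanation_note="Model drafted the summary; reviewer checked citations and tone.",
)

print(asdict(record))  # ready to log, publish to an FAQ hub, or discuss in a team meeting
```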
4. Measure what matters
Move from vanity metrics (number of prompts) to impact metrics: time saved for specific tasks, error/exception rates, reuse of validated prompts, and employee learning outcomes. Design incentive schemes that reward quality (e.g., reusable prompts that cut task time by 20% and are peer‑validated) rather than raw volume. These measures help managers make resource cases and iterate on adoption.
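The difference between a vanity metric and an impact metric is easy to make concrete. The sketch below scores two hypothetical prompt-library entries against the quality bar described above (the figures and the 20% threshold are illustrative assumptions, not real telemetry) and shows that raw usage volume plays no part in the decision.

```python
# Illustrative prompt-library entries; the numbers mirror the incentive example
# in the text and are assumptions, not real telemetry.
prompts = [
    {"name": "contract-clause-summary", "baseline_min": 45, "with_ai_min": 30,
     "adopting_teams": 3, "peer_validated": True,  "uses": 120},
    {"name": "weekly-status-draft",     "baseline_min": 20, "with_ai_min": 19,
     "adopting_teams": 1, "peer_validated": False, "uses": 900},
]

TIME_SAVED_THRESHOLD = 0.20  # the "cuts task time by 20%" bar from the text

for p in prompts:
    time_saved = (p["baseline_min"] - p["with_ai_min"]) / p["baseline_min"]
    qualifies = (time_saved >= TIME_SAVED_THRESHOLD
                 and p["peer_validated"]
                 and p["adopting_teams"] >= 2)
    # Raw usage ("uses") plays no part in the decision: a prompt used 900 times
    # that saves almost no time does not qualify; a peer-validated, reused one does.
    print(f"{p['name']}: {time_saved:.0%} time saved, "
          f"{p['adopting_teams']} adopting teams -> "
          f"{'rewardable impact' if qualifies else 'vanity usage only'}")
```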
5. Build governance into everyday workflows
Operational governance includes clear human‑in‑the‑loop policies, escalation points for high‑risk outputs, data classification guards, and a “no training” carve‑out for sensitive corpora. Managers must be able to translate these controls into team rules of engagement and ensure compliance is not an external audit play but a daily habit.
Risks, blind spots and practical caveats
- Infrastructure numbers are scenario dependent. Public estimates of multi‑trillion‑dollar AI infrastructure build‑outs (McKinsey, Brookfield, executive statements) reflect different methodologies and timelines. Treat headline figures as directional — large and non‑linear — rather than precise five‑year guarantees.
- Survey figures on employee fear vary. Different surveys produce different percentages. A single figure (for example, “58% fear automation”) is credible within the context of certain samples, but it is not a universal truth; leaders should rely on their internal, high‑frequency listening to understand how their own people feel.
- Vendor case studies can be optimistic. Many published outcomes come from well‑supported pilots with vendor engineering support. Independent audit and reproducibility matter. Where numbers will drive decisions (cutting roles, shifting budgets) demand independent verification.
- Over‑rewarding raw usage backfires. Incentives tied to simple counts (prompts issued) create perverse behaviours. Well‑designed incentive programmes reward impact, reuse and shared learning.
- Energy and resilience are real constraints. Scaling AI is not only a software problem — power, cooling and data centre readiness are practical bottlenecks that shape where and how organisations can deploy large models. Capital alone does not equal readiness.
A final imperative for HR leaders and CIOs
AI transformation is not primarily a technology problem; it is an organisational one. HR sits at the centre of that work: designing manager competencies, building learning pathways, aligning incentives, and ensuring a humane transition for people whose tasks will change.
The core insight is deceptively simple: the ROI of enterprise AI will be determined less by model sophistication and more by whether organisations have the managerial capability to integrate AI into daily work transparently and humanely. Supermanagers are the linchpin of that integration. They translate policies into context, turn platform telemetry into coaching opportunities, and make psychological safety a practice rather than an aspiration.
Practical checklist for the next 90 days
- Publish a single‑page AI adoption playbook for people leaders that covers permitted tools, data handling, escalation points, and where to log reusable prompts.
- Update manager job descriptions to include AI‑fluency goals and a short list of practical micro‑courses to complete.
- Launch a Copilot/Viva adoption community for managers to share before/after metrics, validated prompts and peer coaching notes.
- Replace one annual survey with weekly or biweekly pulse checks and ask managers to action at least one team experiment from the pulse insights.
- Design one incentive tied to impact (e.g., a team prompt that reduces a standard task by 20% and is adopted by two other teams).
Organisations that succeed with AI will not be those that acquire the most advanced models, but those that cultivate the right leadership culture. Build managers who can be curious, candid and technically literate; make transparency the default; measure impact not activity; and create the psychological safety that turns fear into experimentation. Superworkers require supermanagers — and the cost of getting that wrong is not just wasted capital, but lost trust and lost talent.
Source: HRM Asia, “Want a superworker company? Create supermanagers”