Blend Data and Intuition: HR in the AI Driven Workplace

Mukta Arya’s message is simple and urgent: human resources must marry data with intuition as artificial intelligence reshapes hiring, learning and workforce planning — not by handing judgment to algorithms, but by using models to surface signals that humans translate into context-rich decisions.

[Image: A businesswoman analyzes a holographic data dashboard titled 'Data and Intuition'.]

Background

The interview published on Exchange4Media (originally from MartechAi) captures Mukta Arya’s practical view from the front lines of HR at Société Générale in the Asia‑Pacific region. Arya recounts a thirty‑year evolution from paper résumés to HR information systems to internal large language models and Microsoft Copilot, and she positions the CHRO as both a steward of people data and a guardian of the human judgment that must sit over automated outputs.

That micro‑to‑macro shift in HR mirrors a broader, documented trend. Academic and industry research shows that predictive analytics and machine learning are now standard tools for tackling problems such as attrition forecasting, skills gap analysis and personalised learning recommendations. Recent peer‑reviewed work documents machine learning models predicting turnover and surfacing drivers such as workload, promotion cadence and job satisfaction — capabilities now being operationalised by enterprises that treat HR as a data discipline.

At the same time, enterprise productivity platforms and vendor‑backed Copilot experiences have accelerated organisational deployments: insurers and banks are already integrating Microsoft Copilot or custom Copilots into workflows, and market announcements show large financial firms moving to Copilot‑style assistants and agent frameworks in production. These platform moves make Arya’s description of internal copilots and summarisation of long reports technically plausible and practically useful.

What Arya Actually Said — Nuts and Bolts from the Interview

  • For Société Générale, AI work began years ago with an internal initiative dubbed SoGen AI, and the bank now runs internal GPT tooling (referred to in the interview as SoGPT) alongside Microsoft Copilot to support HR use cases like survey summarisation, content curation and labour‑law interpretation. Arya stresses that, for a bank, tenant‑grounded and private deployments are non‑negotiable because of confidentiality and regulatory constraints.
  • On analytics, Arya offered concrete examples: combining anonymous survey responses with operational signals (vacation usage, managers’ leave‑approval patterns, sick‑leave spikes) to identify wellbeing problems that numbers alone would not flag, then routing those intersections to line managers for human follow‑up. That is a classic data‑plus‑storytelling pattern: analytics surfaces the signal; HR supplies the context (a minimal sketch of this signal‑intersection join appears after these interview notes).
  • On recruitment and screening, Arya acknowledged the convenience and scale gains of automated shortlists, while warning that filters can exclude non‑standard but excellent candidates; she emphasised checks and balances across the selection funnel (referrals, human interviews, headhunting) to avoid over‑reliance on a single algorithmic pass.
  • On jobs and teams, Arya does not predict immediate mass layoffs. Her view is transitional: tasks will be reshaped; new roles (AI oversight, prompt designers, model auditors) will appear; and many HR functions remain people‑intensive. The immediate priority for her organisation is experimentation and cautious scaling rather than wholesale headcount reductions.
These are important operational notes: the interview gives a practitioner’s view on how an international bank balances pilot projects, privacy constraints and the need for human judgement.
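To make that intersection pattern concrete, here is a minimal Python sketch using pandas. The teams, column names and thresholds are hypothetical, not Société Générale’s pipeline; the point is that only the overlap of weak signals is flagged, and the flag goes to a manager rather than triggering any automated action.

```python
# A minimal sketch, assuming hypothetical team-level aggregates: join anonymised
# survey scores with operational leave signals and flag only the intersections
# for human follow-up. Column names and thresholds are illustrative.
import pandas as pd

survey = pd.DataFrame({
    "team": ["A", "B", "C"],
    "wellbeing_score": [3.1, 4.2, 2.8],          # team average on a 1-5 scale
})
ops = pd.DataFrame({
    "team": ["A", "B", "C"],
    "vacation_days_used_pct": [35, 80, 30],      # share of leave entitlement used
    "sick_leave_spike": [True, False, True],     # vs. trailing 12-month baseline
})

signals = survey.merge(ops, on="team")

# Flag the intersection of weak signals, never a single metric in isolation.
signals["flag_for_manager_followup"] = (
    (signals["wellbeing_score"] < 3.5)
    & (signals["vacation_days_used_pct"] < 50)
    & signals["sick_leave_spike"]
)

print(signals[["team", "flag_for_manager_followup"]])
```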

Why this matters: the enterprise and regulatory context

Banks operate under high compliance and confidentiality constraints. Enterprise Copilot adoptions and partnerships show the technology is being engineered for these environments — Microsoft and partners have delivered Copilot rollouts in financial services and insurance that emphasise governance, tenant‑grounding and licensed data connectors. That industry momentum helps explain why institutions such as Société Générale would pursue internal GPTs and Copilot instances rather than rely on public consumer tools. At the same time, scientific literature and HR analytics reviews show predictable strengths and weaknesses of HR‑facing AI:
  • Strength: predictive models (XGBoost, random forest, transformer‑based approaches) can deliver robust early‑warning signals for attrition and highlight actionable drivers for retention programs (a small illustrative model sketch follows these two bullets).
  • Weakness: algorithms can encode historical bias, and opaque models raise fairness, explainability and legal risk if used in hiring or compensation decisions without human checkpoints. Industry playbooks therefore recommend independent fairness testing, human‑in‑the‑loop gating and audit trails.
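As an illustration of that strength, the sketch below trains a random‑forest classifier on synthetic attrition data and prints the feature importances an HR team could investigate. Every feature, label and threshold is invented for the example; it is not a description of any bank’s production model, and a real deployment would need the fairness testing and human checkpoints described above.

```python
# A minimal sketch on synthetic data: a random-forest attrition model whose
# feature importances act as early-warning signals. Features, labels and the
# synthetic generator are invented for illustration only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "workload_hours": rng.normal(45, 8, n),
    "months_since_promotion": rng.integers(1, 60, n),
    "job_satisfaction": rng.integers(1, 6, n),
})
# Synthetic label loosely driven by workload and stalled promotions.
risk = (0.04 * (df["workload_hours"] - 45)
        + 0.03 * (df["months_since_promotion"] - 30)
        - 0.5 * (df["job_satisfaction"] - 3))
df["left_within_12m"] = (risk + rng.normal(0, 1, n) > 0.5).astype(int)

features = ["workload_hours", "months_since_promotion", "job_satisfaction"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["left_within_12m"], test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))
# Importances suggest where retention conversations should start; they are not verdicts.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")
```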

The practical HR playbook Arya hints at — and what else practitioners should require

The interview reads like a checklist for responsible, pragmatic HR AI adoption. Synthesising Arya’s comments with industry practice and governance guidance yields a practical plan HR leaders should follow now:
  • Convene a cross‑functional AI governance forum. Membership: HR, legal, IT/security, data science and employee representatives. Charter: define acceptable uses, forbidden autonomous actions, data classification and audit requirements. This is not rhetorical — firms that treat Copilots as platforms govern them as such.
  • Map HR systems and data sensitivity. Audit HRIS, ATS, LMS, payroll, benefits platforms and engagement survey stores. Classify each data flow before connecting to any LLM or copilot. Prefer tenant‑grounded models for personnel records; enforce DLP on any prompt‑level logging. A minimal classification sketch appears after this playbook.
  • Start with role‑based, measurement‑driven pilots. Define narrow, high‑frequency use cases (survey summarisation, manager dashboards, candidate triage). Run 6–12 week pilots with business owners, measure time saved and quality delta, and only scale after bias and accuracy checks pass.
  • Build learning‑in‑flow rather than one‑off courses. Micro‑learning, sandboxes and on‑the‑job projects (a “learn‑by‑doing” model) convert generic AI awareness into usable workflow skills. Link micro‑credentials to career mobility and protected learning hours to avoid learning burnout.
  • Require documented human verification on high‑impact outputs. Never allow unverified AI recommendations to drive hiring, firing, compensation or regulatory communications without a human sign‑off and a retained audit trail.
  • Use mixed evidence for measurement. Do not rely on raw Copilot or prompt counts as performance metrics. Combine telemetry with work samples, client feedback and outcome KPIs (rework rate, time saved, retention, promotion rates). A short measurement sketch also appears after this playbook.
In sequence, the rollout looks like this:
  • Define the pilot and metric (weeks 0–2) — choose specific outcome KPIs (e.g., reduce first‑response time to candidate enquiries by 40%).
  • Run an interdisciplinary pilot (weeks 3–14) — product owner, power users, IT/security, L&D.
  • Harden with governance (weeks 15+) — codify data rules, logging, human‑in‑the‑loop gating, fairness audits and an escalation path.
This staged approach converts pilots into responsible scale.
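One way to operationalise the data‑classification step in the playbook above is a simple policy map that every proposed connector is checked against. The sketch below assumes hypothetical system names and sensitivity tiers; it is not Société Générale’s scheme.

```python
# A minimal sketch, assuming hypothetical system names and sensitivity tiers:
# a policy map consulted before any HR data flow is wired into an LLM or copilot.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3   # personnel records: tenant-grounded deployments only
    RESTRICTED = 4     # payroll, health data: no LLM connection at all

DATA_FLOWS = {
    "engagement_survey_aggregates": Sensitivity.INTERNAL,
    "ats_candidate_profiles": Sensitivity.CONFIDENTIAL,
    "payroll_records": Sensitivity.RESTRICTED,
}

def allow_llm_connection(flow: str, tenant_grounded: bool) -> bool:
    """Gate a proposed connector against the classification policy."""
    tier = DATA_FLOWS[flow]
    if tier is Sensitivity.RESTRICTED:
        return False
    if tier is Sensitivity.CONFIDENTIAL:
        return tenant_grounded
    return True

assert allow_llm_connection("ats_candidate_profiles", tenant_grounded=True)
assert not allow_llm_connection("payroll_records", tenant_grounded=True)
```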
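For the measurement step, here is a minimal sketch of outcome‑first pilot scoring: usage telemetry is recorded for context only, and the decision to scale is gated on outcome KPIs. The metric names, values and targets are hypothetical.

```python
# A minimal sketch of outcome-first pilot scoring: telemetry is kept for context
# only, and scaling is gated on outcome KPIs. Metric names, values and targets
# are hypothetical.
pilot_results = {
    "prompts_per_user_per_week": 42,     # telemetry: context, never a KPI
    "first_response_hours_before": 26.0,
    "first_response_hours_after": 15.0,
    "rework_rate_before": 0.18,
    "rework_rate_after": 0.12,
}

def pct_improvement(before: float, after: float) -> float:
    """Relative improvement; positive means the pilot moved the KPI."""
    return (before - after) / before

kpis = {
    "first_response_time": pct_improvement(
        pilot_results["first_response_hours_before"],
        pilot_results["first_response_hours_after"]),
    "rework_rate": pct_improvement(
        pilot_results["rework_rate_before"],
        pilot_results["rework_rate_after"]),
}

# Scale only if every outcome KPI clears its target, regardless of usage counts.
targets = {"first_response_time": 0.40, "rework_rate": 0.10}
ready_to_scale = all(kpis[name] >= targets[name] for name in targets)
print(kpis, "ready to scale:", ready_to_scale)
```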

Corroborating the claims: what can be independently verified — and what cannot

Verified and well‑supported claims:
  • Enterprise Copilot adoption in financial services and insurers is documented: Microsoft customer stories and recent market reports show insurers and banks piloting and rolling out Copilot and Copilot Studio agent frameworks for internal workflows and customer operations. This industry evidence supports Arya’s point that Copilot and enterprise agents are being used in regulated environments.
  • Predictive analytics for attrition is a mature research area with many published implementations showing machine learning models can surface turnover risk and feature importance that HR teams can act on. The scientific literature confirms both the capability and the need for human governance around model explanations.
  • HR best practice frameworks emphasise the blend of human judgement and data: governance, human‑in‑the‑loop sign‑offs for high‑impact decisions, fairness testing and measurement beyond adoption metrics are recommended across industry playbooks. These match the guidance Arya gives about balancing art and science.
Claims that are practitioner statements but not independently corroborated in public sources:
  • The specific internal program names Arya cites — SoGen AI and SoGPT — appear in the transcript of the interview, but independent public confirmation of those names or their implementation details was not found in press releases or partner announcements at the time of writing. That does not invalidate Arya’s statements; many firms run internal AI initiatives without public branding. It does mean the names and implementation specifics should be treated as organisation‑internal facts, and as direct claims from the interview, unless corroborated by Société Générale’s public communications or third‑party technical disclosures.
Caveat: where an executive describes private, tenant‑grounded tooling and internal GPTs, public corroboration is often absent for confidentiality reasons. That’s a normal pattern in regulated industries, but journalism should label such statements as company claims and seek confirmation if the factual detail is central to an investigative claim.

Strengths in Arya’s position — what she gets right

  • Emphasis on human judgement and storytelling: Arya’s insistence that HR professionals are “artists with a scientific mind” is a useful corrective to techno‑determinism. Analytics without narrative and managerial follow‑up produces dashboards, not change. Organisations that pair signal detection with managerial coaching see higher adoption and better outcomes.
  • Practical, incremental adoption: the move from pilots (candidate sourcing, learning curation, chatbots) to embedded systems is the right path for risk‑heavy sectors. The industry examples of Copilot and platform partners adopting a staged approach confirm this as a best practice.
  • Focus on measurement and fairness: Arya highlights the need to check analytics results and bust myths through transparent reporting (e.g., female progression in management). That emphasis on data‑driven transparency aligns with governance guidance and helps mitigate reputation and legal risk.

Risks and blind spots to watch for

  • Over‑reliance on predictive signals can become prescriptive if HR uses model outputs to automate decisions without adequate auditability. Best practice requires explainability and documented decision logic for every flagged recommendation. Industry guidance repeatedly warns against allowing unverified model outputs to trigger high‑impact HR actions.
  • Measurement traps: raw usage telemetry (number of prompts or Copilot queries) is a weak proxy for value. If organisations tie compensation or promotion to adoption counts without verifying output quality, they create perverse incentives that drive speed over accuracy. The playbooks emphasise outcome KPIs, not telemetry alone.
  • Equity and access: if employers fail to protect learning time, provide modern devices, or fund certifications, AI fluency will concentrate among privileged groups. HR must budget for equitable reskilling so AI does not widen internal inequalities. This is a commonly reported operational risk and a structural design choice.
  • Unverified organisational claims: where public confirmation of internal tooling is absent, journalists and other organisations should treat named internal products as company claims pending verification. This is especially important when specific product names or internal audit claims are used to demonstrate compliance.

A tested checklist HR leaders can act on this quarter

  • Publish an “AI at work” policy that explains permitted uses, data handling, monitoring and employee rights.
  • Convene an HR‑led AI governance board with legal, security and frontline representation.
  • Run a 60‑day task audit to identify low‑risk, high‑frequency pilot candidates (recruitment triage, survey summarisation, manager support).
  • Launch one cohort pilot combining micro‑learning, sandbox practice and a live KPI (e.g., reduce recruiter time‑to‑fill by X%).
  • Protect learning time (recommendation: at least 4 hours/week per participant during pilot).
  • Require documented human verification for any decision affecting pay, promotion or employment status.
  • Start internal badging for core AI competencies (Promptcraft, Verification, Governance Basics) and link badges to mobility.
  • Throughout, repeat the core cycle: define the outcome and metric, pilot with a cross‑functional team, then harden governance and scale responsibly.

Final analysis — the balance HR must strike

Mukta Arya’s interview is useful because it comes from a pragmatic HR operator in a regulated, people‑centred industry. Her central thesis — that HR must blend data and intuition — is neither technophobic nor techno‑utopian. It calls for:
  • Data to surface patterns and make HR decisions auditable and outcome‑oriented. Evidence supports predictive analytics’ ability to provide early warning signals and personalise learning pathways.
  • Intuition and judgment to interpret signals in context, to preserve fairness, and to apply empathy and narrative in people decisions. Academic and industry playbooks emphasise that human oversight is a governance requirement, not optional.
The responsible path is hybrid: measure what matters, protect human review for high‑impact decisions, invest in role‑specific skills, and design governance that makes AI a productivity multiplier — not an undiagnosed black box driving people decisions.
A final practical note: while Arya names internal programs (SoGen AI, SoGPT) and reports real use cases inside Société Générale, those product names and detailed implementations are presently claims from the interview; independent public confirmation of those exact names was not located during research. That does not negate the substance of Arya’s recommendations, but it does change how an editor or policymaker should treat those specifics: as organisation‑level claims to be confirmed before using them as evidence of a sector‑wide standard.
In short: HR leaders who act now should do so by pairing careful pilots and measurable outcomes with robust governance and human oversight — because the next wave of HR tools will reward organisations that combine model‑driven signals with the deeply human skills of judgement, storytelling and empathy.

Source: Exchange4Media https://www.exchange4media.com/mark...a-arya-chro-apac-societe-generale-149983.html
 
