Every leader who’s rushed to “buy AI” and roll it out by fiat has learned the same lesson: technology without people is a cost, not an advantage.

Background: why the conversation matters now

Generative AI is no longer an experimental sidebar for labs and startups — it’s being embedded in Microsoft 365, enterprise back-ends, low-code/no-code platforms and customer-facing systems. That acceleration has produced a classic organizational mismatch: executives feel pressure to show progress, procurement teams push for vendor solutions, and frontline teams face abrupt changes to how they do work. The result is a proliferation of stalled pilots, fractured ownership, and avoidable reputational risk. This is the problem Dr. Gleb Tsipursky described in his recent interviews and commentary: success with AI is more psychological than technical — it requires starting with people, not tech.
This article synthesizes the practical lessons from that argument, tests key technical claims against vendor commitments about data and privacy, and lays out an actionable, risk-aware playbook for leaders who must deploy AI in real organisations. Practical evidence from enterprise platforms and community discussion shows that a people-first, no-code pathway both lowers resistance and improves long-term adoption — but only when paired with rigorous governance and measurement.

Overview: the core claim — psychology over code

At its simplest, the psychology-first thesis says:
  • People who will use the AI must be involved early; their workflows, mental models and fears should shape the solution.
  • No-code/low-code tools democratize creation and give employees ownership, turning AI from a mysterious “black box” into a controllable tool they can shape.
  • Leaders who ignore anxiety, trust, and change management will see projects stall or fail; those who invest in people see faster, safer adoption.
This is not a rhetorical point. It’s a pattern visible across industries: when solutions are purchased as closed vendor packages and imposed from the top, users feel sidelined and threatened; when solutions are built with the people who will use them, adoption is higher and errors decline. Practitioners and communities have repeatedly documented this human-centered pattern.

Case study (as reported): insurance claims and the “smart intern” metaphor

Tsipursky’s example — an insurance firm that built a Copilot-based assistant for claims agents rather than buying a packaged vendor product — illustrates the principle.
  • Problem: claims agents navigated a sprawling set of policy documents, a time-consuming, error-prone task that lived in individual experience rather than a single system.
  • Approach: the firm used a no-code / Copilot build to ingest the policy forms, craft clear natural-language prompts, and let agents test and refine the outputs themselves (a minimal sketch of this grounding pattern follows the case study).
  • Outcome (reported): reduced time to find policy citations, fewer errors, and crucially reduced resistance because agents felt they had shaped the tool.
That outcome — turning AI into a trained assistant rather than a mysterious replacement — is the central behavioral win. However, the precise operational details reported in the interview (for example, the claim that “over 100 detailed policy forms” were ingested) are illustrative rather than independently verified; they should be treated as an indicative example rather than a rigorously documented trial. Leaders should welcome the narrative, but they should also demand the metrics and the audit trail before scaling similar patterns. Flag: case-study specifics reported in interviews can be valuable heuristics, but they are not a substitute for a reproducible pilot with telemetry and independent verification.
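To make the pattern concrete, here is a minimal sketch in Python of the grounding approach the case study describes: the assistant is only allowed to answer from supplied policy excerpts and must cite them, which is what keeps it behaving like a supervised intern rather than a free-wheeling chatbot. The function name and prompt wording are illustrative, not taken from the firm’s actual implementation; in Copilot Studio the same constraint is expressed by restricting the agent’s knowledge sources to the uploaded policy documents.

```python
def build_claims_prompt(question: str, policy_excerpts: list[str]) -> str:
    """Assemble a grounded prompt: the model may answer only from the supplied excerpts."""
    numbered = "\n\n".join(f"[{i + 1}] {text}" for i, text in enumerate(policy_excerpts))
    return (
        "You are an assistant for insurance claims agents.\n"
        "Answer ONLY from the policy excerpts below and cite them as [n]. "
        "If the answer is not in the excerpts, say so and name the form an agent should check.\n\n"
        f"Policy excerpts:\n{numbered}\n\n"
        f"Question: {question}"
    )
```

The point is the constraint, not the syntax: whether the tool is hand-built or assembled in a no-code designer, the agent is told where its knowledge ends.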

Why no-code and “citizen developer” pathways work — and where they don’t

No-code’s strengths

No-code and low-code platforms — Microsoft Power Platform (Power Apps, Power Automate and Copilot Studio, the successor to Power Virtual Agents), Zapier and others — make AI accessible to people who already know Excel, forms or basic logic. This lowers the learning curve and gives employees the ability to:
  • Prototype quickly and iterate based on real feedback.
  • Shape prompts and guardrails themselves, increasing psychological ownership.
  • Build contextual integrations specific to their workflows, which often fits better than a one-size-fits-all vendor package. (microsoft.com) (zapier.com)
These platforms are explicitly designed for “citizen developers” and boast measurable reductions in time-to-build for automations and chatbots when Copilot-like assistants are used. Microsoft’s Power Platform messaging and early evaluations report significant speedups in creating flows and apps when Copilot is applied to low-code tasks. (microsoft.com)

No-code’s limits and hidden risks

No-code does not eliminate governance needs. Democratizing creation can produce:
  • Fragmented automations with overlapping responsibilities.
  • Shadow agents that touch sensitive data without appropriate logging or DLP controls.
  • Overconfident outputs from generative systems when used without validation steps.
For that reason, no-code must be accompanied by clear guardrails, centralized governance primitives, and lifecycle management: registries of agents, standardized testing and checklists, logging and audit trails, and escalation paths for when an assistant produces surprising or regulated outcomes. Community reporting and enterprise playbooks repeatedly call out these governance gaps as the main dividing line between rollouts that merely stick around while “failing safely” and those that truly deliver value.
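As one concrete illustration of what “centralized governance primitives” can mean in practice, here is a minimal sketch of a registry entry for a citizen-built agent. The schema and field names are hypothetical; the point is that every agent has an accountable owner, a declared data footprint and a review cadence that IT and legal can see.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    """One entry in a central registry of citizen-built agents (illustrative schema)."""
    agent_id: str
    owner: str                    # accountable business owner, not just the original builder
    purpose: str                  # one-line description of the task the agent performs
    data_sources: list[str]       # e.g. SharePoint sites or Dataverse tables it can read
    handles_regulated_data: bool  # drives DLP policy, logging depth and review cadence
    human_checkpoint: bool        # True if outputs need sign-off before they are acted on
    last_reviewed: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: registering a claims-lookup assistant so it is visible, owned and reviewable.
registry = [
    AgentRecord(
        agent_id="claims-policy-lookup",
        owner="claims-operations-lead",
        purpose="Find and cite policy clauses for claims agents",
        data_sources=["PolicyForms document library"],
        handles_regulated_data=True,
        human_checkpoint=True,
    )
]
```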

The technical claim every leader asks first: is my data safe?

One of the most common sources of organisational anxiety is data handling: will a user prompt with sensitive information be absorbed into the vendor’s model training corpus? Short answer: for enterprise-grade offerings, major vendors have explicit commitments that customer data is not used to train public models by default. But the caveats matter.
  • Microsoft’s enterprise Copilot and Power Platform documentation state that prompts, responses and data accessed via Microsoft Graph are not used to train shared foundation models unless a tenant explicitly opts in — and that enterprise offerings use Azure OpenAI Service and other protections. They emphasise tenant isolation and enterprise-grade controls. (microsoft.com)
  • OpenAI’s enterprise statements (ChatGPT Enterprise, API business commitments) make a similar claim: business inputs and outputs are not used to improve models by default; customers can opt in for improvement if they choose. These pages emphasize encryption in transit and at rest and contractual commitments for enterprise customers. (openai.com)
At the same time, public confusion has occasionally arisen — and vendors have had to publicly clarify their commitments. Reporting from major outlets chronicled media-fuelled confusion over whether Microsoft Office’s “connected experiences” meant documents were being used for model training; Microsoft responded with public denials and clarification. This episode shows how even technically correct vendor statements can be misread and erode trust if communications are poor. Leaders must therefore both choose enterprise plans and explain them clearly to employees. (reuters.com)
Practical rule: always purchase enterprise/tenant licences, review the vendor’s data residency and training clauses in the contract, and require vendor evidence of isolation (DPIAs, SOC reports, contractual data commitments). Do not assume consumer-grade offerings or free tiers provide equivalent guarantees.
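For teams wiring this up themselves rather than relying on Copilot’s built-in integration, the practical difference is which endpoint the prompt travels to. The sketch below, assuming a hypothetical Azure OpenAI resource and deployment name, shows a call routed through a tenant’s own Azure OpenAI Service instance rather than a consumer endpoint; the contractual data-handling commitments discussed above attach to this kind of enterprise resource, not to free consumer tiers.

```python
import os
from openai import AzureOpenAI  # openai Python package, v1.x

# Endpoint, deployment name and API version are placeholders for your own tenant's resource.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="claims-assistant",  # the deployment name configured in your Azure resource
    messages=[{"role": "user", "content": "Which policy form covers accidental water damage?"}],
)
print(response.choices[0].message.content)
```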

Psychology of adoption: the anxieties to name and manage

Generative AI introduces a distinct set of emotional and cognitive challenges to teams:
  • Fear of displacement: junior staff in particular see automation as a threat to future opportunity.
  • Loss of craft: employees worry that the day-to-day decision-making that built their expertise will be eroded as it is routinised away.
  • Trust & hallucinations: generative models can confidently assert falsehoods — the classic “hallucination” problem — and employees worry about making or repeating errors.
  • Reputation risk: a public AI-generated error can cause real brand damage if customer-facing outputs are wrong.
Good leaders will name these anxieties explicitly and build mitigation strategies: transparent communication about intent, roles, reskilling pathways, and “human-in-the-loop” decision rules where the final human sign-off is required for any consequential output. Discussions in practitioner communities and recent corporate playbooks emphasise training, transparent labeling, and easy escalation as high-impact measures.

A concrete adoption playbook: start with people, ship with governance

Below is a practical, sequential blueprint based on behavioral science, enterprise tech capabilities, and lessons from early adopters.
  • Map the human workflows, not the software. Document who does what, who makes decisions, and where errors are costly.
  • Identify high-frequency, low-risk pilots. Choose tasks where AI can demonstrably speed work without legal/regulatory exposure (e.g., internal summarisation, draft responses, form lookups).
  • Use no-code tools for pilots. Enable frontline staff to co-design and iterate using Copilot in Power Platform, or controlled Zapier workflows to avoid vendor lock-in. (microsoft.com)
  • Define human checkpoints. For every automation, define where a human must review and accept results (a minimal sketch of such a rule appears after this playbook).
  • Enforce logging, DLP and lineage. All agent interactions must be auditable and tied to a responsible owner.
  • Measure both hard and soft metrics. Track time saved, error rates, and also employee confidence and perceived control.
  • Publish an internal “AI Charter.” Make it simple: what data is permitted in prompts, how to escalate, and what licensing the org uses.
  • Stage scaling only after independent audits of bias, security, and accuracy. Require vendor transparency on training/data usage before production expansion.
This staged approach puts people first while preserving the speed and innovation benefits of modern platforms. It transforms AI from an imposition into a capability employees can shape.
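The “human checkpoints” step above can be made unambiguous by writing the routing rule down as logic rather than leaving it to judgment in the moment. The sketch below is one hypothetical rule, with made-up task types and a made-up confidence threshold; the real value is that the rule is explicit, versioned and auditable.

```python
CONSEQUENTIAL_TASKS = {"customer_reply", "claims_decision", "policy_interpretation"}

def requires_human_signoff(task_type: str, model_confidence: float,
                           touches_regulated_data: bool) -> bool:
    """Decide whether an AI output must be reviewed by a person before it is used."""
    if touches_regulated_data or task_type in CONSEQUENTIAL_TASKS:
        return True                   # consequential or regulated work always gets a reviewer
    return model_confidence < 0.8     # low-confidence outputs are routed to a human as well

# Example: a high-confidence internal summary can ship; a claims decision cannot.
print(requires_human_signoff("internal_summary", 0.93, False))  # False
print(requires_human_signoff("claims_decision", 0.99, True))    # True
```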

Governance checklist for IT and legal teams

  • Contracts: insist on explicit language that business/tenant data will not be used to train public models unless explicitly opted-in.
  • Logging & telemetry: require full prompt and response audit trails for agents touching regulated data.
  • DLP & masking: set policies to block PII or financial data from being submitted to generative endpoints without approved safeguards (a minimal masking sketch appears after this checklist).
  • Human-in-loop rules: codify which decisions require human approval (and how that approval is recorded).
  • Vendor assurance: require SOC 2/ISO certifications, and request DPIAs, red-team results and third-party audits for critical systems.
  • Training: run mandatory training on prompt privacy, hallucination risk and escalation protocols for every employee who uses AI tools.
Community experience and industry playbooks highlight these as minimum baseline controls — not optional extras.
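To illustrate where the “DLP & masking” control sits in the flow, here is a deliberately simple sketch of masking obvious identifiers before a prompt leaves the tenant. The regexes are illustrative only; in production this job belongs to the platform’s own DLP tooling (for example, Microsoft Purview policies) rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; real deployments should rely on platform DLP policies.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace recognised identifiers with typed placeholders before calling a generative endpoint."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}_REDACTED]", prompt)
    return prompt

print(mask_pii("Customer jane.doe@example.com asked about claim IE29AIBK93115212345678"))
# -> "Customer [EMAIL_REDACTED] asked about claim [IBAN_REDACTED]"
```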

The strategic upside — what leaders gain by doing this well

When organisations pair people-first change management with appropriate technical controls, the payoff is tangible:
  • Faster decision-making at the front lines because employees can retrieve and synthesise institutional knowledge rapidly.
  • Lower onboarding friction for new staff, who can query agents that encapsulate institutional processes.
  • Better staff retention because employees who shape their tools feel more empowered and less threatened.
  • New roles and career paths (agent ops, prompt auditors, AI quality control) that raise the organisation’s digital maturity and create opportunities rather than only displacing work.

What to watch for — five red flags during pilot-to-scale

  • Declining human verification: if teams stop checking AI outputs, cognitive atrophy and error cascades can follow.
  • Shadow agents: automations spawned without IT review touching sensitive data.
  • Vendor opacity: vendors unwilling to provide DPIAs, logs, or contractual training guarantees.
  • Measurement blindspots: focusing on “hours saved” without assessing error impact or customer satisfaction.
  • Poor communications: employees who do not understand the difference between consumer and enterprise AI offerings will default to risk-averse behaviour or unsafe workarounds. (theverge.com)
Any one of these red flags is a reason to pause scaling and remediate; taken together they are a reason to reset governance.

Looking forward: agents, autonomy and the next wave of psychological questions

Leaders should prepare for a future where agents deliver multi-step outcomes, not just single tasks. The psychological stakes rise when AI is capable of handling entire customer journeys: trust, accountability, escalation and anthropomorphism all become more salient.
  • Expect error rates to be higher in early multi-stage agents; error propagation is a real technical and reputational risk.
  • Plan for roles that manage agent networks: orchestration, auditing, and ethical oversight will be core operational competencies.
  • Preserve explainability: make sure agents record why they took each step and what data informed the choice (a minimal trace-logging sketch follows below).
The next wave will shift from “can we build it?” to “who should be allowed to let it decide?” — a fundamentally human question.
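Preserving explainability mostly comes down to making agents leave a trail. A minimal sketch of such a trace record follows; the field names are hypothetical, and in a Copilot Studio or Power Automate deployment the equivalent would be structured run history and audit logs rather than hand-written code.

```python
import json
from datetime import datetime, timezone

def log_agent_step(trace: list, step: str, rationale: str, sources: list) -> None:
    """Append one explainability record: what the agent did, why, and on what data."""
    trace.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "rationale": rationale,  # the stated reason for taking this action
        "sources": sources,      # documents or records that informed the choice
    })

# Example: one step of a hypothetical multi-step claims-triage run.
trace = []
log_agent_step(trace, "lookup_policy",
               "Customer reported water damage; retrieved the relevant coverage clauses",
               ["PolicyForm_Home_2024.pdf, section 4.2"])
print(json.dumps(trace, indent=2))
```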

Conclusion: a practical synthesis for leaders

Generative AI can be transformative — but the transformation is social as much as technical. The most reliable path to adoption is not to dash to the procurement table and “plug in” a vendor product; it is to design pilots with the people who will use the tools, to choose no-code pathways that allow frontline shaping, and to pair those pilots with hard governance: logging, DLP, contractual protections and measurement.
  • Start with people: map workflows, co-design with end-users, surface anxieties and address them directly.
  • Use no-code to build ownership, but govern like code: require audit trails, human checkpoints and vendor assurances. (microsoft.com)
  • Validate technical claims: purchase enterprise-grade licences, and insist that vendors demonstrate data isolation commitments in writing. (openai.com)
Treat AI like a trained intern: useful, fast, and capable — but requiring supervision, context, and ongoing training. That framing reduces fear, increases accountability, and turns a potentially disruptive technology into a durable competitive advantage.
Leaders who internalize these principles will be the ones who move beyond pilots into reliable, scalable, and human-centric AI at work.

Source: The Irish Independent, Gina London: “The psychology of AI is fairly simple – you start with people, not technology”