Every leader who’s rushed to “buy AI” and roll it out by fiat has learned the same lesson: technology without people is a cost, not an advantage.
Background: why the conversation matters now
Generative AI is no longer an experimental sidebar for labs and startups — it’s being embedded in Microsoft 365, enterprise back-ends, low-code/no-code platforms and customer-facing systems. That acceleration has produced a classic organizational mismatch: executives feel pressure to show progress, procurement teams push for vendor solutions, and frontline teams face abrupt changes to how they do work. The result is a proliferation of stalled pilots, fractured ownership, and avoidable reputational risk. This is the problem Dr. Gleb Tsipursky described in his recent interviews and commentary: success with AI is more psychological than technical — it requires starting with people, not tech.
This article synthesizes the practical lessons from that argument, tests key technical claims against vendor commitments about data and privacy, and lays out an actionable, risk-aware playbook for leaders who must deploy AI in real organisations. Practical evidence from enterprise platforms and community discussion shows that a people-first, no-code pathway both lowers resistance and improves long-term adoption — but only when paired with rigorous governance and measurement.
Overview: the core claim — psychology over code
At its simplest, the psychology-first thesis says:
- People who will use the AI must be involved early; their workflows, mental models and fears should shape the solution.
- No-code/low-code tools democratize creation and give employees ownership, turning AI from a mysterious “black box” into a controllable tool they can shape.
- Leaders who ignore anxiety, trust, and change management will see projects stall or fail; those who invest in people see faster, safer adoption.
Case study (as reported): insurance claims and the “smart intern” metaphor
Gleb’s example — an insurance firm that built a Copilot-based assistant for claims agents rather than buying a packaged vendor product — illustrates the principle.
- Problem: claims agents navigated a sprawling set of policy documents, a time-consuming, error-prone task whose know-how lived in individual experience rather than in any single system.
- Approach: the firm used a no-code / Copilot approach to ingest the policy forms, craft clear query-language prompts, and let agents test and refine outputs themselves.
- Outcome (reported): reduced time to find policy citations, fewer errors and, crucially, less resistance because agents felt they had shaped the tool.
Why no-code and “citizen developer” pathways work — and where they don’t
No-code’s strengths
No-code and low-code platforms — Microsoft Power Platform (Power Apps, Power Automate), Copilot Studio (formerly Power Virtual Agents), Zapier and others — make AI accessible to people who already know Excel, forms or basic logic. This lowers the learning curve and gives employees the ability to:
- Prototype quickly and iterate based on real feedback.
- Shape prompts and guardrails themselves, increasing psychological ownership.
- Build contextual integrations specific to their workflows, which often fits better than a one-size-fits-all vendor package. (microsoft.com) (zapier.com)
No-code’s limits and hidden risks
No-code does not eliminate governance needs. Democratizing creation can produce:
- Fragmented automations with overlapping responsibilities.
- Shadow agents that touch sensitive data without appropriate logging or DLP controls.
- Overconfident outputs from generative systems when used without validation steps.
The technical claim every leader asks first: is my data safe?
One of the most common sources of organisational anxiety is data handling: will a user prompt containing sensitive information be absorbed into the vendor’s model training corpus? Short answer: for enterprise-grade offerings, major vendors have explicit commitments that customer data is not used to train public models by default. But the caveats matter.
- Microsoft’s enterprise Copilot and Power Platform documentation states that prompts, responses and data accessed via Microsoft Graph are not used to train shared foundation models unless a tenant explicitly opts in — and that enterprise offerings use Azure OpenAI Service and other protections. They emphasise tenant isolation and enterprise-grade controls. (microsoft.com)
- OpenAI’s enterprise statements (ChatGPT Enterprise, API business commitments) make a similar claim: business inputs and outputs are not used to improve models by default; customers can opt in for improvement if they choose. These pages emphasize encryption in transit and at rest and contractual commitments for enterprise customers. (openai.com)
Practical rule: always purchase enterprise/tenant licences, review the vendor’s data residency and training clauses in the contract, and require vendor evidence of isolation (DPIAs, SOC reports, contractual data commitments). Do not assume consumer-grade offerings or free tiers provide equivalent guarantees.
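To make the distinction concrete, here is a minimal sketch in Python (using the openai SDK’s AzureOpenAI client) of routing requests through an organisation’s own Azure OpenAI deployment rather than a consumer endpoint. The endpoint, deployment name and environment variables are hypothetical placeholders, and wiring up the client this way is no substitute for the contractual review described above.

```python
# Minimal sketch: send generative calls to the tenant's own Azure OpenAI
# deployment (covered by enterprise terms) instead of a consumer endpoint.
# The resource URL, deployment name and env vars are hypothetical placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_version="2024-06-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://contoso-claims.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # keep in a secrets manager, never in code
)

response = client.chat.completions.create(
    model="claims-assistant-gpt4o",  # the tenant's own deployment name, not a public model alias
    messages=[
        {"role": "system", "content": "Answer only from the supplied policy excerpts."},
        {"role": "user", "content": "Which clause covers water damage in policy form HO-3?"},
    ],
)
print(response.choices[0].message.content)
```

Whether that deployment actually carries no-training and data-residency guarantees is a contractual question, which is why the licence review and DPIA evidence above come first.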
Psychology of adoption: the anxieties to name and manage
Generative AI introduces a distinct set of emotional and cognitive challenges to teams:
- Fear of displacement: junior staff in particular see automation as a threat to future opportunity.
- Loss of craft: employees worry the routinised decision-making that made them experts will be eroded.
- Trust & hallucinations: generative models can confidently assert falsehoods — the classic “hallucination” problem — and employees worry about making or repeating errors.
- Reputation risk: a public AI-generated error can cause real brand damage if customer-facing outputs are wrong.
A concrete adoption playbook: start with people, ship with governance
Below is a practical, sequential blueprint based on behavioral science, enterprise tech capabilities, and lessons from early adopters.
- Map the human workflows, not the software. Document who does what, who makes decisions, and where errors are costly.
- Identify high-frequency, low-risk pilots. Choose tasks where AI can demonstrably speed work without legal/regulatory exposure (e.g., internal summarisation, draft responses, form lookups).
- Use no-code tools for pilots. Enable frontline staff to co-design and iterate using Copilot in Power Platform, or controlled Zapier workflows to avoid vendor lock-in. (microsoft.com)
- Define human checkpoints. For every automation, specify where a human must review and accept results (a minimal approval-gate sketch follows this list).
- Enforce logging, DLP and lineage. All agent interactions must be auditable and tied to a responsible owner.
- Measure both hard and soft metrics. Track time saved, error rates, and also employee confidence and perceived control.
- Publish an internal “AI Charter.” Make it simple: what data is permitted in prompts, how to escalate, and what licensing the org uses.
- Stage scaling only after independent audits of bias, security, and accuracy. Require vendor transparency on training/data usage before production expansion.
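To ground the human-checkpoint and logging steps above, here is a minimal sketch, in Python, of an approval gate that holds an AI-drafted output until a named reviewer accepts or rejects it and records the decision in an audit log. The function names, fields and file path are illustrative assumptions, not any specific platform’s API.

```python
# Minimal sketch of a human checkpoint: an AI draft is held for review,
# and the reviewer's decision is written to an auditable JSON Lines log.
# Function names, fields and the log path are illustrative assumptions.
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "ai_review_audit.jsonl"

def submit_for_review(draft: str, task: str, owner: str) -> str:
    """Queue an AI-generated draft for human review and return its review ID."""
    review_id = str(uuid.uuid4())
    _log({"review_id": review_id, "event": "submitted", "task": task,
          "owner": owner, "draft": draft})
    return review_id

def record_decision(review_id: str, reviewer: str, approved: bool, notes: str = "") -> None:
    """Record the human decision; nothing ships without an 'approved' entry."""
    _log({"review_id": review_id, "event": "approved" if approved else "rejected",
          "reviewer": reviewer, "notes": notes})

def _log(entry: dict) -> None:
    entry["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Usage:
# rid = submit_for_review(draft_text, task="claims summary", owner="claims-team")
# record_decision(rid, reviewer="j.doe", approved=True)
```

In practice the same pattern sits behind a form or queue in whatever tool the team already uses; the point is that every automation has a recorded owner, reviewer and decision.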
Governance checklist for IT and legal teams
- Contracts: insist on explicit language that business/tenant data will not be used to train public models unless the organisation explicitly opts in.
- Logging & telemetry: require full prompt and response audit trails for agents touching regulated data.
- DLP & masking: set policies to block PII or financial data from being submitted to generative endpoints without approved safeguards.
- Human-in-loop rules: codify which decisions require human approval (and how that approval is recorded).
- Vendor assurance: require SOC 2/ISO certifications, and request DPIAs, red-team results and third-party audits for critical systems.
- Training: run mandatory training on prompt privacy, hallucination risk and escalation protocols for every employee who uses AI tools.
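As a starting point for the logging and DLP items above, here is a minimal sketch, in Python, that masks a few obvious PII patterns before a prompt is submitted and writes an audit record of what was redacted. The regexes, field names and log path are illustrative assumptions and are not a substitute for an enterprise DLP product.

```python
# Minimal sketch of pre-submission masking plus an audit trail.
# The patterns catch only obvious cases (emails, US-style SSNs, 16-digit
# card numbers) and are illustrative, not an enterprise DLP policy.
import json
import re
from datetime import datetime, timezone

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def mask_prompt(prompt: str, user: str, audit_path: str = "prompt_audit.jsonl") -> str:
    """Redact known PII patterns and record what was masked before submission."""
    masked = prompt
    redactions = {}
    for label, pattern in PII_PATTERNS.items():
        masked, count = pattern.subn(f"[{label.upper()} REDACTED]", masked)
        if count:
            redactions[label] = count
    with open(audit_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "redactions": redactions,
        }) + "\n")
    return masked

# Usage: safe_prompt = mask_prompt(raw_prompt, user="agent-042")
```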
The strategic upside — what leaders gain by doing this well
When organisations pair people-first change management with appropriate technical controls, the payoff is tangible:
- Faster decision-making at the front lines because employees can retrieve and synthesise institutional knowledge rapidly.
- Lower onboarding friction for new staff, who can query agents that encapsulate institutional processes.
- Better staff retention because employees who shape their tools feel more empowered and less threatened.
- New roles and career paths (agent ops, prompt auditors, AI quality control) that raise the organisation’s digital maturity and create opportunities rather than only displacing work.
What to watch for — five red flags during pilot-to-scale
- Declining human verification: if teams stop checking AI outputs, cognitive atrophy and error cascades can follow.
- Shadow agents: automations spawned without IT review touching sensitive data.
- Vendor opacity: vendors unwilling to provide DPIAs, logs, or contractual training guarantees.
- Measurement blindspots: focusing on “hours saved” without assessing error impact or customer satisfaction.
- Poor communications: employees who do not understand the difference between consumer and enterprise AI offerings will default to risk-averse behaviour or unsafe workarounds. (theverge.com)
Looking forward: agents, autonomy and the next wave of psychological questions
Leaders should prepare for a future in which agents deliver multi-step outcomes — not just single tasks. The psychological stakes rise when AI is capable of handling entire customer journeys: trust, accountability, escalation and anthropomorphism all become more salient.
- Expect error rates to be higher in early multi-stage agents; error propagation is a real technical and reputational risk.
- Plan for roles that manage agent networks: orchestration, auditing, and ethical oversight will be core operational competencies.
- Preserve explainability: make sure agents record why they took each step and what data informed the choice (a minimal step-trace sketch follows this list).
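One lightweight way to preserve that trace, sketched below in Python with illustrative field names, is to have the orchestration layer append a structured record for every agent step capturing the action taken, the stated rationale and the data sources consulted.

```python
# Minimal sketch of a per-step agent trace: each step records what was done,
# why, and which data sources informed it, so reviewers can reconstruct a run.
# Field names and the JSON Lines storage format are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List
import json

@dataclass
class AgentStep:
    run_id: str
    step: int
    action: str                  # e.g. "lookup_policy_clause"
    rationale: str               # the agent's stated reason for taking the step
    data_sources: List[str] = field(default_factory=list)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_step(trace_path: str, step: AgentStep) -> None:
    """Append one step record to a JSON Lines trace file for later audit."""
    with open(trace_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(step)) + "\n")

# Usage:
# append_step("run_trace.jsonl", AgentStep(
#     run_id="claim-118", step=1,
#     action="lookup_policy_clause",
#     rationale="Customer asked about water damage coverage",
#     data_sources=["policy_forms/HO-3.pdf"]))
```

Traces like this are what make the auditing and orchestration roles described above workable, and they give the "agent ops" function something concrete to review.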
Conclusion: a practical synthesis for leaders
Generative AI can be transformative — but the transformation is social as much as technical. The most reliable path to adoption is not to dash to the procurement table and “plug in” a vendor product; it is to design pilots with the people who will use the tools, to choose no-code pathways that allow frontline shaping, and to pair those pilots with hard governance: logging, DLP, contractual protections and measurement.
- Start with people: map workflows, co-design with end-users, surface anxieties and address them directly.
- Use no-code to build ownership, but govern like code: require audit trails, human checkpoints and vendor assurances. (microsoft.com)
- Validate technical claims: purchase enterprise-grade licences, and insist that vendors demonstrate data isolation commitments in writing. (openai.com)
Leaders who internalize these principles will be the ones who move beyond pilots into reliable, scalable, and human-centric AI at work.
Source: The Irish Independent Gina London: The psychology of AI is fairly simple – you start with people, not technology