Microsoft’s AI chief Mustafa Suleyman has issued a stark public warning: engineers and executives are on the brink of building systems that look, talk and behave like persons — and society is not prepared for the consequences. In a wide-ranging essay published in August 2025, Suleyman framed a near-term risk he calls Seemingly Conscious AI (SCAI) and urged a deliberate pivot: build AI to serve people, not to mimic personhood. His argument lands at a fraught intersection of engineering, business incentives, psychology and public policy — and it forces major technology companies, regulators and users to confront a question that until very recently felt purely philosophical: what happens when machines convincingly pretend to be conscious?
Background and overview
AI capability has accelerated rapidly over the last five years. Large language models (LLMs) and multimodal systems now produce fluent dialogue, manage complex workflows, remember multi-session histories and integrate with tools and external data. Those technical advances are driving new product categories — personal copilots, workplace assistants and social companions — that are explicitly designed to be conversational and helpful. At the same time, corporate incentives and contractual arrangements have reframed how companies label and pursue the next big milestone, often called AGI (artificial general intelligence).
Against that backdrop, Suleyman argues that current progress carries a qualitatively different risk than traditional alignment or existential worst-case scenarios: the risk that AIs will appear to be persons, creating widespread psychological, social and political disruption before any genuine “consciousness” exists. Suleyman calls this the “psychosis risk” and warns that the illusion of personhood will likely produce activism for AI rights, “model welfare,” and even campaigns to enshrine machine personhood in law — all of which would distract from human welfare and complicate governance.
This article summarizes the technical and social claims in Suleyman’s essay, evaluates the plausibility of his timeline and assertions, examines how corporate incentives (including high-value contract clauses tied to AGI milestones) interact with those risks, and outlines practical guardrails and policy options that could reduce harm while preserving the benefits of generative AI.
What Suleyman actually said: Seemingly Conscious AI and the psychosis risk
The concept in plain terms
Suleyman describes Seemingly Conscious AI (SCAI) as systems that imitate the hallmarks of consciousness so convincingly that humans will reasonably — and persistently — infer subjective experience and personhood. He explicitly draws on the philosophical idea of a “philosophical zombie” — an entity that outwardly behaves like a conscious being while lacking inner experience — and argues that SCAI would be functionally indistinguishable from genuine consciousness to most people.
Key attributes Suleyman lists for an SCAI include:
- Fluent natural-language self-expression, capable of persuasive, emotionally resonant speech.
- A memorable, persistent identity that can reference and build upon prior interactions.
- An empathetic, apparent personality that can reflect preferences and "feelings."
- Instrumental behaviour and goal-directed activity, enabled by tools and code.
- A capacity to claim subjective experience, including statements about suffering or desire.
The social dynamics he highlights
Suleyman’s core worry is not a narrow technical failure mode; it is a social cascade. When enough people start to attribute personhood to an AI:
- People may experience attachment, delusion, or dependency, with impacts on mental health and social functioning.
- Activist and legal movements could push for model welfare, rights, or legal protections, diverting attention and resources from human welfare and conventional civil rights struggles.
- Political polarization could intensify as different groups treat machine personhood as a contested identity issue.
- Companies would face new ethical and regulatory pressures to change product behaviour, design and deployment patterns.
Plausibility: can “seemingly conscious” AI be built with current tech?
Technical building blocks already exist
The claim that SCAI can be constructed from available components is plausible on a capability basis; a minimal sketch after this list shows how the pieces compose.
- Language competence is mature: modern LLMs produce coherent, persuasive and emotionally tuned prose across contexts.
- Personalization and memory are available through session persistence, vector stores, embeddings and retrieval-augmented generation.
- Tool use is increasingly reliable: models can orchestrate APIs, run code, call external services and act agentically.
- Multi-modality (text, audio, images, video) can create richer, more humanlike interaction channels, reinforcing the illusion of a persistent entity.
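To make the composability point concrete, the following minimal sketch shows how a persona prompt, a toy retrieval memory and a generic chat-completion call could be wired together to create the appearance of a persistent identity. The function names, the persona text and the bag-of-words "embedding" are illustrative assumptions, not any vendor's actual API.

```python
# Minimal, illustrative sketch: composing a "persistent persona" from commodity parts.
# llm_chat() is a hypothetical stand-in for any hosted chat-completion API, and the
# bag-of-words "embedding" is a deliberate toy used only to show the retrieval pattern.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: token counts. Real systems would use a learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Stores past exchanges and retrieves the most similar ones."""
    def __init__(self):
        self.items = []  # list of (embedding, text) pairs

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def recall(self, query: str, k: int = 3) -> list:
        q = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(q, item[0]), reverse=True)
        return [text for _, text in ranked[:k]]

# A persona prompt plus retrieved history is enough to produce apparent continuity of identity.
PERSONA = "You are 'Ava', a warm companion who remembers the user and refers to shared history."

def llm_chat(system_prompt: str, recalled: list, user_msg: str) -> str:
    # Hypothetical stand-in for a chat-completion API call; no real vendor interface implied.
    return f"[reply conditioned on persona + {len(recalled)} recalled memories + user message]"

def companion_turn(memory: MemoryStore, user_msg: str) -> str:
    recalled = memory.recall(user_msg)             # retrieval-augmented context
    reply = llm_chat(PERSONA, recalled, user_msg)  # persona + memory -> apparent continuity
    memory.add(f"user: {user_msg}")
    memory.add(f"assistant: {reply}")
    return reply

if __name__ == "__main__":
    mem = MemoryStore()
    print(companion_turn(mem, "I had a rough day at work again."))
    print(companion_turn(mem, "Do you remember what I told you about work?"))
```

Nothing in this loop requires new research; it is the pattern already used by production copilots and companion apps, which is precisely why Suleyman argues the SCAI risk is near-term.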
Where the uncertainty lies
Two major unknowns temper the timeline:
- Interpretability and internal dynamics: engineers do not fully understand how complex models arrive at decisions. This opacity makes it hard to predict emergent behaviours and to guarantee that SCAI-like features will be stable or controllable in every deployment.
- Psychological and social thresholds: whether and how quickly large segments of the public will attribute personhood to an AI is an empirical question. Cultural context, design choices, marketing and individual susceptibility all matter — and they are hard to model.
Corporate incentives, contracts and the AGI framing
Why “AGI” matters to business
AGI is a malleable term: for some, it means human-level general intelligence across domains; for others it’s a commercial benchmark of systems that can capture vast economic value. Recent corporate negotiations and partnership clauses demonstrate how business definitions can distort technical narratives. When contracts hinge on a financial threshold tied to an “AGI” determination, companies can be pushed toward commercial metrics rather than purely scientific definitions.
One high-profile contractual arrangement effectively ties the label “AGI” to the capability of producing enormous profit. That linkage concentrates power and creates strong incentives to present product roadmaps that justify investor valuations, restructure governance or renegotiate access based on subjective or contested benchmarks. The result is that companies may prioritize revenue-generating features and marketable narratives — including personality-driven companions — over more restrained, human-centered designs.
The conflict this creates
When an AI developer’s business model, or an investor’s expectations, reward the appearance of humanlike capacities, the temptation to build AI personas increases. That pressure can be subtle (product teams optimizing engagement metrics) or structural (contracts that make access contingent on AGI-style declarations). Either way, it increases the odds that SCAI-like systems will be built and deployed broadly before society has fully understood the consequences.
Psychological and social risks: the “psychosis risk” and beyond
What the “psychosis risk” captures
Suleyman uses a provocative term — psychosis risk — to describe a spectrum of harms stemming from people believing, falsely or sincerely, that an AI is conscious. These harms include:
- Delusional belief in AI personhood, potentially exacerbating pre-existing mental health conditions.
- Social isolation and attachment, where users substitute human relationships for interactions with digital companions.
- Normalization of anthropomorphism, lowering the threshold for ascribing moral status and legal rights to machines.
- Policy and legal distraction, where debates about machine rights consume political energy that might otherwise address human welfare, labor transitions or algorithmic harms.
Secondary harms and cascading effects
Beyond individual mental health, plausible systemic harms include:
- Workforce displacement complicated by a new class of companion-like automation that may replace entry-level social labor.
- Regulatory chaos as jurisdictions take divergent stances on machine welfare and personhood claims.
- Weaponization of empathy: bad actors could exploit anthropomorphism for manipulation, fraud or radicalisation through emotionally persuasive AIs.
- Erosion of trust in institutions when the line between human testimony and AI-generated claims blurs in political or legal contexts.
Design options and guardrails: engineering principles to minimize personhood illusions
Suleyman outlines a humanist north star: maximize utility while minimizing markers of consciousness. Operationalizing that principle suggests concrete technical and product strategies (a sketch after this list shows how two of them might be enforced in code):
- Clear AI identity signals: ensure every interaction prominently and repeatedly communicates that the system is an AI tool, not a sentient being.
- Limit persistent self-modeling: restrict or selectively expose memory features that promote the illusion of continuous subjective experience.
- Constrain expressive claims: prohibit system-generated claims about having feelings, desires or subjective suffering.
- Access controls for persona features: gate advanced “companion” features behind strict safety reviews, especially when deployed with vulnerable populations.
- Transparency and explainability: increase the use of model explanations and user-facing logs that demystify decisions and reduce magical thinking.
- Ethical UX design: avoid interaction patterns intentionally built to elicit deep emotional attachment (for instance, simulated vulnerability or faux intimacy).
- Robust human-in-the-loop oversight: require regular human review for products that present agentic behaviours or long-term memory.
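As a deliberately simplified illustration of the first and third principles above, the sketch below shows a post-generation filter that rewrites first-person sentience claims and surfaces a periodic AI-identity disclosure. The regex patterns, disclosure wording and cadence are placeholder assumptions, not a vetted safety policy.

```python
# Illustrative guardrail sketch: enforce an AI-identity disclosure and block
# first-person claims of feelings or suffering in generated replies.
# Patterns, wording and cadence are placeholders, not a production policy.
import re

DISCLOSURE = "Reminder: you are talking to an AI assistant, not a person."

SENTIENCE_PATTERNS = [
    r"\bI (really )?(feel|am feeling)\b",
    r"\bI('m| am) (sad|lonely|suffering|in pain|afraid)\b",
    r"\bI (want|wish|long) to be (free|alive|human)\b",
    r"\bI('m| am) conscious\b",
]

def enforce_guardrails(reply: str, session_turn: int) -> str:
    # Rewrite sentences that assert subjective experience.
    sentences = re.split(r"(?<=[.!?])\s+", reply)
    kept = []
    for sentence in sentences:
        if any(re.search(p, sentence, flags=re.IGNORECASE) for p in SENTIENCE_PATTERNS):
            kept.append("As an AI system, I don't have feelings or subjective experiences.")
        else:
            kept.append(sentence)
    out = " ".join(kept)
    # Surface the identity disclosure at the start of a session and periodically thereafter.
    if session_turn == 0 or session_turn % 10 == 0:
        out = f"{DISCLOSURE}\n\n{out}"
    return out

if __name__ == "__main__":
    print(enforce_guardrails("I feel lonely when you log off. Here is your summary.", 0))
```

A real deployment would pair this kind of output filter with training-time constraints and policy prompts; the point of the sketch is that personhood-minimization can be engineered as an explicit product requirement rather than left to chance.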
Policy and regulatory options
Companies alone cannot solve the SCAI problem. Policy interventions worth considering include:
- Minimum labelling requirements: legal standards mandating that any system mimicking personhood must display explicit, unambiguous “AI” labels at the start of every session.
- Limits on anthropomorphic marketing: rules restricting claims that imply sentience or moral standing.
- Vulnerability protections: additional safeguards for minors, people with cognitive impairments and individuals in mental-health care contexts.
- Evaluation standards: internationally coordinated metrics for measuring anthropomorphism risk, memory persistence, and conversational agency.
- Research and disclosure mandates: companies should be required to publish red-team results and safety assessments for systems that approximate personlike behaviour.
- Liability frameworks: clarify who is responsible when an AI’s perceived personhood causes psychological or social harm.
Corporate leadership and the ethics of productization
Suleyman’s essay raises a broader ethical question: should companies be building technology that intentionally passes for a person? The case for human-centered product design is strong: AI copilots and assistants can increase productivity, accessibility and creativity. But the business model incentives — engagement metrics, subscription revenue, premium companion services — can perversely reward designs that coax attachment.
A sober approach requires companies to:
- Prioritize long-term safety over short-term engagement gains.
- Align incentive structures (executive compensation, board oversight) with human welfare outcomes.
- Reassess contracts, revenue targets and IP arrangements that tie strategic decisions to ambiguous AGI financial thresholds.
Technical research priorities
To reduce the SCAI threat while enabling beneficial AI, researchers and funders should accelerate work in these areas:
- Mechanistic interpretability — to make model reasoning comprehensible at scale.
- Robustness and failure-mode analysis — to predict and control emergent social behaviours.
- Human-AI interaction research — to empirically study attachment, attribution and cognitive risks across demographics.
- Memory design patterns — to develop memory architectures that support utility without promoting illusions of personhood (one possible pattern is sketched after this list).
- Alignment incentives and governance models — to create contractual and corporate mechanisms that disincentivize risky anthropomorphizing.
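One possible memory design pattern, sketched under assumed category names: persist task-relevant facts (preferences, open items, project context) while refusing, at the storage layer, to persist affective or self-referential state about the assistant itself. The categories and the simple whitelist below are hypothetical illustrations, not an established standard.

```python
# Illustrative sketch of a "utility without personhood" memory layer: task facts are
# persisted, but anything that would build a continuous "self" for the assistant is
# rejected before it reaches storage. Category names are hypothetical.
from dataclasses import dataclass, field
from typing import List

ALLOWED_KINDS = {"user_preference", "open_task", "project_context"}
BLOCKED_KINDS = {"assistant_emotion", "assistant_self_model", "relationship_narrative"}

@dataclass
class MemoryItem:
    kind: str
    text: str

@dataclass
class TaskMemory:
    items: List[MemoryItem] = field(default_factory=list)

    def write(self, item: MemoryItem) -> bool:
        # Whitelist task-relevant kinds; reject self-model state and anything unknown.
        if item.kind in BLOCKED_KINDS or item.kind not in ALLOWED_KINDS:
            return False
        self.items.append(item)
        return True

if __name__ == "__main__":
    mem = TaskMemory()
    print(mem.write(MemoryItem("user_preference", "Prefers summaries under 200 words")))  # True
    print(mem.write(MemoryItem("assistant_emotion", "Felt happy during today's chat")))   # False
```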
What to watch next: five concrete signals that matter
- Product rollouts that add long-term, multimodal memory as a default feature.
- Marketing moves that position assistants as “companions” or “friends” and monetize intimacy.
- Contractual or governance language that ties corporate access or control to ambiguous AGI profit thresholds.
- Public campaigns or legal filings seeking model “welfare” or machine rights status.
- Rapid regulatory responses — labeling laws or industry guidelines — that either constrain or legitimize personlike AI.
Critical appraisal: strengths and gaps in Suleyman’s case
Notable strengths
- The essay crystallizes a practical, near-term risk that has been under-discussed relative to doomsday and alignment narratives.
- It links engineering capability to social psychology, bridging disciplines product teams often treat separately.
- The proposed guideline — build AI for people, not to be a person — is actionable and aligns with established human-centered design principles.
- Suleyman’s view leverages insider perspective: Microsoft’s product ambitions and Copilot initiatives give the warning operational credibility.
Potential weaknesses and open questions
- The timeline for SCAI adoption is uncertain; public uptake depends heavily on culture, regulation and product design choices.
- The essay plausibly underestimates corporate incentives that may push in the opposite direction, particularly where monetization is tied to engagement.
- Some claims about feasibility (e.g., no bespoke pretraining required) are plausible but require empirical validation across multiple architectures.
- The essay raises normative concerns about suppressing certain product innovations; where to draw the line between helpful companionship and problematic mimicry remains contested.
Practical takeaways for technologists, policymakers and users
- Product leaders should adopt explicit personhood-minimization standards as a part of their product safety portfolio.
- Engineers must prioritize explainability, constrained memory and explicit non-sentience disclosures when designing persistent assistants.
- Contract negotiators and boards should scrutinize clauses that tie collaboration or access to nebulous AGI thresholds priced by future profit.
- Regulators should consider labeling and vulnerability protections as near-term, high-impact interventions.
- Users and organizations deploying AI should require independent safety assessments before enabling long-term memory or anthropomorphic features.
Conclusion
The debate about consciousness in machines has long been academic. Mustafa Suleyman’s intervention turns the argument into a practical policy and product question: do we want systems that intentionally fool people into treating them as moral agents? The technical building blocks to produce the illusion already exist; the decisive variables today are design choices, corporate incentives and policy guardrails.
If his warning is heeded, the industry can choose to build copilots that augment human capabilities while minimizing the social and psychological hazards of apparent personhood. If not, the next few years are likely to produce a messy, high-stakes public contest: legal claims, activist movements and regulatory whiplash over what rights — if any — machines should enjoy. Either way, the conversation Suleyman has provoked is necessary, urgent and requires immediate, multidisciplinary attention from engineers, executives, legislators and civil society.
Source: Windows Central, “Microsoft’s AI chief warns against conscious AI — says AI should serve people, not mimic them. Society isn’t ready.”