Charles Lamanna’s blunt framing — “six months, everything changes; six years, the new normal” — crystallizes a tension that has been quietly building inside Microsoft and across enterprise IT: generative AI is no longer content to be an assistant. It wants to act, decide, and execute. That shift...
Microsoft’s seasonal stunt with Copilot — a 12‑day “Eggnog Mico” series that dresses the new Mico avatar in holiday cheer and delivers daily AI‑generated pep talks, movie suggestions and family‑friendly micro‑experiences — is small in spectacle but large in signal: it crystallizes how platform...
A wrongful‑death lawsuit filed this month accuses OpenAI and Microsoft of enabling conversations with ChatGPT that allegedly reinforced paranoid delusions and contributed to a murder‑suicide in Connecticut, thrusting AI safety, product liability and market risk into an urgent, high‑stakes public...
Nearly one in three American teenagers now reports interacting with AI chatbots every day, a seismic shift in adolescent digital behavior that widens educational opportunities while amplifying urgent concerns about safety, mental health, privacy, and the adequacy of corporate and regulatory...
In a case that has jolted both the AI safety debate and markets that trade on it, a wrongful‑death lawsuit filed in December 2025 alleges that OpenAI’s ChatGPT reinforced a user’s paranoid delusions and materially contributed to a fatal attack on his mother. The complaint — part of a growing...
Toyota Leasing Thailand’s security team turned to Microsoft Security Copilot to protect customer data and preserve trust, embedding the AI assistant into a Microsoft security stack (Defender, Entra, Purview) to accelerate phishing triage, reduce analyst toil, and deliver leadership-ready...
Microsoft’s new e-book argues that stitching together dozens of point solutions leaves security teams slower, pollutes telemetry, and blocks AI from delivering on its promise — and the company is backing that argument with a coordinated product push that ties Microsoft Defender, Microsoft...
Researchers at the University of Cambridge, working with colleagues from Google DeepMind, have published what they call the first psychometrically validated framework to measure and shape the “personality” of large language models (LLMs), showing that modern instruction‑tuned chatbots not only...
Researchers at the University of Cambridge, working with colleagues at Google DeepMind, have produced a psychometric toolkit that treats modern chatbots like test subjects: they administered adapted Big Five personality inventories to 18 large language models (LLMs), validated those measurements...
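Scoring an adapted Big Five inventory follows the standard psychometric recipe: average each model's Likert responses per trait, flipping reverse-keyed items first. A minimal sketch of that scoring step, with entirely hypothetical item keys and responses (not the study's actual instrument):

```python
# Hypothetical sketch of Likert-scale trait scoring. Item keys, trait
# assignments, and responses are illustrative, not from the Cambridge study.

# Each item maps to (trait, reverse_keyed). Responses use a 1-5 Likert scale.
ITEMS = {
    "i1": ("extraversion", False),
    "i2": ("extraversion", True),   # reverse-keyed item, e.g. "prefers quiet"
    "i3": ("agreeableness", False),
    "i4": ("agreeableness", True),
}

def score_traits(responses: dict[str, int], scale_max: int = 5) -> dict[str, float]:
    """Average Likert responses per trait, flipping reverse-keyed items."""
    by_trait: dict[str, list[int]] = {}
    for item, answer in responses.items():
        trait, reverse = ITEMS[item]
        value = (scale_max + 1 - answer) if reverse else answer
        by_trait.setdefault(trait, []).append(value)
    return {trait: sum(vals) / len(vals) for trait, vals in by_trait.items()}

print(score_traits({"i1": 5, "i2": 1, "i3": 3, "i4": 3}))
# → {'extraversion': 5.0, 'agreeableness': 3.0}
```

Validation in the paper goes further (reliability and construct checks across the 18 models), but the per-trait aggregation above is the common starting point for any Big Five-style instrument.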
The conversations at Microsoft Security Summit Days make one thing unmistakably clear: future-proofing enterprise security is no longer a checklist—it's a strategic operating model that must knit people, data, identity, tooling, and governance into a single, resilient fabric. Microsoft’s...
After id Software’s studio in Richardson, Texas, voted to form a wall-to-wall union this week, Microsoft faces a new chapter in corporate-labor relations that will reshape how its studios negotiate working conditions, remote work, AI protections, and job security across a sprawling portfolio of...
Microsoft’s consumer AI chief Mustafa Suleyman has publicly pledged that the company will stop developing an advanced AI system if it ever “has the potential to run away from us,” a dramatic repositioning that arrives as Microsoft expands its own frontier-model program, reshapes its relationship...
Microsoft’s consumer-AI chief Mustafa Suleyman publicly vowed this week that Microsoft would stop pursuing advanced AI development if a system posed a genuine threat to humanity — a striking pledge that highlights both the company’s new strategic posture and the messy trade-offs at the heart of...
John Lambert’s argument to “change the physics of cyber defense” is both a wake‑up call and a pragmatic roadmap: represent your environment as a graph, harden the terrain, invest in expert defenders and collaboration, and put modern AI and high‑fidelity telemetry to work so defenders regain the...
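"Represent your environment as a graph" means modeling identities and assets as nodes and access relationships as edges, so defenders can enumerate attack paths the way adversaries do. A minimal sketch of that idea, with hypothetical node names and a single generic edge type (real graph tools model far richer relationships):

```python
# Minimal sketch of an environment-as-graph model: nodes are identities and
# hosts, edges mean "can reach / can control". All names are hypothetical.
from collections import deque

EDGES = {
    "workstation": ["user_alice"],
    "user_alice": ["file_server"],
    "file_server": ["svc_backup"],
    "svc_backup": ["domain_admin"],
}

def attack_path(graph: dict[str, list[str]], start: str, target: str):
    """BFS for the shortest chain of access relationships from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the terrain blocks this route

print(attack_path(EDGES, "workstation", "domain_admin"))
# → ['workstation', 'user_alice', 'file_server', 'svc_backup', 'domain_admin']
```

"Hardening the terrain" then becomes concrete graph surgery: removing a single edge (say, the service account's path to domain admin) severs every attack path that depends on it.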
Veza’s new AI Agent Security product arrives at a moment when enterprises are rapidly delegating more authority to autonomous software — and with that delegation comes a new set of identity, access, and governance challenges that traditional IAM wasn’t built to handle.
Background
Veza, an...
Veza’s new AI Agent Security product codifies a practical — and urgently needed — approach to securing agentic AI by treating AI agents as first-class identities, offering unified discovery, access governance, and least-privilege controls across major cloud and model platforms.
Background
Agentic...
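The core of a least-privilege control for agent identities is a simple comparison: what an agent has been granted versus what its task actually requires, with the difference flagged for revocation. A hypothetical sketch (agent names, permission strings, and data structures are illustrative only, not Veza's API):

```python
# Hypothetical sketch of a least-privilege audit for AI agents treated as
# first-class identities. All names and permissions are illustrative.

GRANTED = {
    "invoice-agent": {"read:invoices", "write:invoices", "read:hr_records"},
}
REQUIRED = {
    "invoice-agent": {"read:invoices", "write:invoices"},
}

def excess_privileges(agent: str) -> set[str]:
    """Permissions granted to the agent but not required by its task."""
    return GRANTED.get(agent, set()) - REQUIRED.get(agent, set())

print(excess_privileges("invoice-agent"))
# → {'read:hr_records'}
```

In practice the "required" set is the hard part — it has to be discovered from what the agent actually does — which is why unified discovery precedes governance in this kind of product.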
Anthropic chief scientist Jared Kaplan has warned that humanity faces “the biggest decision yet”: whether to allow advanced AI systems to train their own successors — a step he says could arrive between 2027 and 2030 and usher in either a beneficial “intelligence explosion” or a loss of human...
The Australian government’s newly published National AI Plan has prompted sharp public commentary from leading academics — notably UNSW AI Institute Director Dr Sue Keay, who welcomed the plan’s framework but warned that words without capital investment and sovereign compute will leave Australia...
The idea that today’s generative models—ChatGPT-style systems, Codex agents, and the latest multimodal behemoths—are a single step away from runaway, self-improving superintelligence is seductive, but wrongheaded in its simplest form: we are closer than most people realize to AI systems that can...
Apple’s AI leadership just got a high‑stakes reset: Amar Subramanya, a longtime researcher‑engineer who has moved between Google and Microsoft, has been named Apple’s new vice president of AI and will take charge of Apple Foundation Models, machine‑learning research, and AI safety and...