Obsidian Says No to Generative AI: Why Human Craft Still Rules RPGs

Obsidian Entertainment’s short, clear answer to whether it uses generative AI — “we haven’t been using it at all” — lands like a deliberate counterpoint to an industry that increasingly treats AI as an inevitable productivity lever. The comment, given in a recent interview with Game File and confirmed by studio PR, isn’t a rhetorical flourish: it’s a public stance by a prolific, Xbox‑owned RPG developer that chose to keep its storytelling and content pipelines human‑driven even as competitors and publishers rush to fold generative systems into asset creation, QA, and prototyping workflows.

Background

Obsidian Entertainment has been one of the busiest mid‑sized AAA studios through the 2020s. In 2025 alone the studio shipped a high‑profile fantasy RPG (Avowed), an early access survival sequel (Grounded 2), and a big sci‑fi follow‑up (The Outer Worlds 2) — a release cadence that would be noteworthy for a much larger team. Those titles launched across PC and console platforms during a year when the broader industry has been debating where — and whether — to deploy generative AI in creative workflows.

At the same time, a rising number of major publishers and platform stakeholders have announced aggressive AI programs: partnerships to co‑develop generative tools, targets to automate portions of QA and debugging, and internal mandates encouraging or requiring staff to adopt AI tools in daily workflows. These moves have produced both industry optimism about faster iteration and fierce pushback from creatives and communities worried about craft, jobs, and quality control. This tension — human craftsmanship vs. machine acceleration — frames the practical and ethical debate that Obsidian’s public refusal to use generative AI brings into sharper relief.

What Obsidian actually said

When asked directly about generative AI in its writing process, Obsidian veterans Josh Sawyer and Leonard Boyarsky replied that AI “isn’t something we’ve used” and “we haven’t been using it at all,” respectively. Boyarsky also walked back past speculative comments about AI’s potential, calling his earlier enthusiasm a “thought experiment” that he would now temper as impractical and “very unwieldy.” That frankness from senior creatives is notable given Obsidian’s output and the studio’s place inside Microsoft’s Xbox Game Studios portfolio.

Why it matters: Obsidian is not a tiny indie. It is a studio of a few hundred people that has grown significantly under Microsoft ownership over the past several years, yet still chooses to keep writing and many creative processes human‑led. The choice is both artistic and operational: Obsidian’s design philosophy places narrative craft and authored systems at the center of its player experience, and its leaders clearly view generative AI as an uncertain tool for those disciplines.

The industry context: who’s leaning into AI — and how

Generative AI adoption is not hypothetical for the games industry; it’s active and accelerating. Three high‑visibility trends illustrate the scope:
  • Publisher‑level partnerships and tool integration: Electronic Arts announced a strategic partnership with Stability AI to co‑develop generative models and workflows intended to help artists and developers prototype faster and generate assets at scale. That deal is explicit about using AI as a creative assistant — but it also signals a mainstreaming of generative pipelines inside large studios.
  • QA and automation targets: Square Enix publicly set an ambitious target of automating roughly 70% of QA and debugging tasks with AI by the end of 2027. A target of that scale, if pursued, has deep implications for QA headcounts and for how studios validate game stability.
  • Asset and marketing backlash: Activision’s use of generative AI for promotional art and some in‑game assets has triggered community outcry when output quality slipped (the now‑ubiquitous “AI slop” problem), leading to frank admissions on storefront pages about AI usage and renewed debates about disclosure and copyright. These controversies show how poor governance or insufficient human oversight can produce reputational risk.
Taken together, these trends show a bifurcated adoption model: technical and production roles are integrating AI tools to accelerate iterative tasks, while the highest‑value creative decisions remain hotly contested and, in many cases, kept human‑centric.

Why Obsidian’s stance is strategically defensible

There are concrete strengths and strategic reasons behind Obsidian’s explicit non‑use of generative AI.

1) Narrative integrity and authored choice

Obsidian’s games are defined by authored narrative arcs, carefully engineered quests, and branching dialogues where tone, foreshadowing, and deliberate misdirection matter. Generative models today can produce text at scale, but they struggle consistently with long‑form narrative cohesion, tone continuity, and the kinds of deliberate paradoxes and moral ambiguity veteran RPG writers craft. Keeping writing wholly human preserves the studio’s editorial control and reduces the risk of incoherent emergent dialogue that breaks immersion.

2) Testability and QA clarity

Boyarsky’s observation that an AI‑driven, real‑time conversational system “quickly gets very unwieldy” is a practical reality for production teams. Testing deterministic, authored dialogue trees is relatively straightforward; testing a near‑infinite generative dialogue space is not. Obsidian’s comment about not knowing how to test a fully AI‑driven conversational system highlights a core engineering and QA problem: generative outputs create combinatorial explosion in validation scenarios, which undermines predictable quality.
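The testing asymmetry described above can be made concrete. The sketch below — a minimal illustration with a hypothetical node layout, not anything from Obsidian’s actual tooling — shows why an authored dialogue tree has a finite, enumerable test surface while a generative system does not.

```python
# Minimal sketch: an authored dialogue tree maps each node to the
# choices it offers, so every conversation path can be enumerated
# and validated exhaustively. Node names here are hypothetical.
dialogue_tree = {
    "greet": ["ask_quest", "insult"],
    "ask_quest": ["accept", "decline"],
    "insult": ["apologize", "leave"],
    "accept": [], "decline": [], "apologize": [], "leave": [],  # endings
}

def all_paths(tree, node):
    """Enumerate every conversation path from `node` to an ending."""
    if not tree[node]:
        return [[node]]
    return [[node] + path
            for child in tree[node]
            for path in all_paths(tree, child)]

paths = all_paths(dialogue_tree, "greet")
for p in paths:
    print(" -> ".join(p))
print(f"{len(paths)} total paths to test")
# A generative system offers no such finite list: each turn can yield
# arbitrary text, so the validation space never closes — the
# "combinatorial explosion" problem Boyarsky describes.
```

QA can walk this finite path list and sign off on every branch; a model that improvises dialogue in real time has no equivalent checklist, which is exactly the “unwieldy” testing problem cited above.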

3) Brand trust and community expectations

Obsidian’s player base prizes carefully written RPG content, and the studio’s reputation for quality storytelling is a competitive advantage. Publicly rejecting AI, at least for writing, positions Obsidian as safeguarding a creative value that many players explicitly want preserved. Given the recent flood of “AI slop” into other franchises and the ensuing backlash, Obsidian’s decision protects its brand identity and signals fidelity to craft.

The risks Obsidian is avoiding — and the costs it accepts

Not using generative AI is an explicit trade‑off that reduces certain operational risks while increasing others.

Avoided risks

  • Reduced exposure to low‑quality, unvetted AI outputs being shipped to players.
  • Lower legal and IP ambiguity around machine‑sourced content and copyright eligibility.
  • Lower likelihood of triggering workforce disruption narratives or internal morale issues tied to automation fears.

Accepted costs

  • Slower prototyping for art and world design compared to teams that use text‑to‑image or image‑to‑3D tools.
  • Potentially higher headcount costs for repetitive asset creation, iteration, and QA.
  • Operational constraints when competitors can rapidly produce vertical content for live service features or user‑facing events using AI toolchains.
Those costs are real. EA’s publicized partnership with Stability AI, for example, is framed around empowering creatives, but it will also materially speed iteration and reduce time‑to‑prototype for high‑volume content — an advantage in the cutthroat live‑service economy. Choosing not to use those tools is therefore a strategic sacrifice of certain production efficiencies in favor of artistic control.

Labor, legality, and governance: the wider implications

Generative AI adoption is reshaping the backstage of game development in three structural ways.

1) Job redefinition and QA automation

Square Enix’s automation target exemplifies how AI can threaten entry‑level and quality roles. QA historically functions as both bug‑finder and an early career path into design; automating a large share of those jobs risks eroding talent pipelines and institutional knowledge unless companies commit to reskilling programs. The industry debate is not just economic — it is also about long‑term workforce development.

2) Copyright and IP complexity

Legal authorities have begun to scrutinize whether raw AI outputs are eligible for copyright protection. When studios rely on generative models trained on third‑party datasets, questions arise about chain‑of‑title for assets and potential infringement. Activision’s admitted use of AI in Call of Duty reopened those conversations and raised practical questions about disclosure and rights management for AI‑assisted content. Robust governance — including provenance tracking and human‑in‑the‑loop validation — is now table stakes for publishers using generative systems.

3) Product quality and player trust

The “AI slop” backlash shows how quickly player trust can erode when companies ship cheaply validated AI art or content. The short‑term cost savings from using generative tools without proper human curation can become long‑term reputational debt. Obsidian’s human‑first stance hedges against that particular market risk.

A pragmatic middle path: governance, tooling, and human oversight

Obsidian’s decision is one valid approach. But it’s not the only responsible path forward. For studios tempted by generative AI, a balanced governance model reduces downside and lets teams capture practical benefits without ceding creative control.
  • Establish clear policy boundaries: define where generative AI is permitted (prototyping, mockups, internal docs) and where it is forbidden (final narrative, credited art assets) to preserve authorial integrity.
  • Require human‑in‑the‑loop validation: every AI‑assisted asset should pass an author sign‑off step to ensure quality and legal compliance.
  • Implement provenance and versioning: record model versions, training sources (as available), and prompt logs to support audits and copyright claims.
  • Invest in reskilling: equip QA and junior artists with AI literacy so they can leverage tools safely while avoiding displacement.
  • Pilot programs with transparent disclosure: run small, well‑scoped pilots and communicate results openly to the development community and players.
These measures reconcile the efficiency gains of generative AI with the craft and trust that audiences expect from narrative studios.
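The provenance and sign‑off measures above can be sketched in code. The record below is a hypothetical illustration — field names, the `approve` step, and the `shippable` gate are assumptions, not any publisher’s actual pipeline — of how model version, prompt logs, and a mandatory human review could travel with an AI‑assisted asset.

```python
# Hypothetical provenance record for an AI-assisted asset: captures
# which model produced it, the full prompt log for audits, and a
# human-in-the-loop sign-off without which the asset cannot ship.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    asset_id: str
    model_name: str                               # generative model used
    model_version: str                            # pinned for audits
    prompts: list = field(default_factory=list)   # prompt log
    reviewed_by: str = ""                         # human sign-off
    reviewed_at: str = ""

    def approve(self, reviewer: str) -> None:
        """Record the mandatory human sign-off before shipping."""
        self.reviewed_by = reviewer
        self.reviewed_at = datetime.now(timezone.utc).isoformat()

    @property
    def shippable(self) -> bool:
        # The human-in-the-loop gate: no reviewer, no release.
        return bool(self.reviewed_by)

rec = ProvenanceRecord("env_rock_017", "image-model", "v2.1",
                       prompts=["mossy boulder, concept sketch"])
assert not rec.shippable        # unreviewed AI output is blocked
rec.approve("lead_artist")
assert rec.shippable            # ships only after a human signs off
```

The design point is that provenance and approval are properties of the asset itself, so an audit or copyright question can be answered later from the record rather than from memory.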

What Obsidian’s stance means for Xbox and publisher strategy

Obsidian sits under Microsoft’s Xbox umbrella, an ecosystem that — like many large tech companies — is actively investing in AI across products and services. A single studio’s public abstention doesn’t imply corporate contradiction; rather, it highlights a nuanced reality: platform owners can enable and provide AI tooling while still granting studios latitude over adoption. That choice architecture should be celebrated: it creates room for studios to make editorial and operational decisions aligned with their creative identity.
However, the tension is real. Microsoft’s broader AI investments could create pressure points: shared infrastructure, cross‑studio expectations, or efficiency targets might eventually nudge teams toward tool adoption. Obsidian’s transparency about its stance is therefore both a defensive posture and a public bet on the value of human‑led storytelling in a market increasingly tempted by automated scale.

Final analysis: what gamers and studios should watch next

Obsidian’s “no AI shortcuts” message is a useful, clarifying stance in a murky debate. It defends human authorship, prioritizes quality control, and protects player trust — all defensible positions for a narrative studio whose reputation rests on carefully authored experiences. But it is not cost‑free. As competitive pressures mount in the live‑service and content‑heavy parts of the industry, studios that refuse even modest AI assistance risk slower iteration and higher per‑asset costs.
Where the industry goes from here will be shaped by three forces:
  • The pace of technical progress: If AI models advance to reliably produce high‑quality, testable narrative assets, economic incentives to adopt will grow.
  • Legal and regulatory clarity: Copyright rulings and industry standards around disclosure will influence whether publishers can safely rely on generative outputs.
  • Community and employee pushback: Player expectations and developer morale will constrain reckless adoption. Recent backlash against low‑quality AI assets shows the appetite for demanding higher governance.
Obsidian’s public stance is a reminder that choice matters. Studios can and must choose how to integrate new technologies, and those choices should reflect creative goals, ethical norms, and operational realism. For players who value authored storytelling, Obsidian’s human‑first approach is an encouraging signal. For studios navigating a volatile production landscape, the path forward will require rigorous governance, careful pilots, and an honest accounting of the costs and benefits of generative AI in games.

Obsidian’s simple line — “we haven’t been using it at all” — does more than deny a technology: it re‑centers a conversation about what players actually pay for when they buy a story‑driven game. In an age of tool‑driven scale, that reminder is valuable. The rest of the industry will answer in its own way, but Obsidian’s stance ensures at least one high‑profile developer keeps the craft of human storytelling front and center as the debate over generative AI in game development continues to evolve.
Source: Windows Central https://www.windowscentral.com/gaming/xbox/obsidian-entertainment-not-using-generative-ai-at-all/
 
