AI‑First vs. Authentic Craft: Krafton and Pocketpair Redefine Gaming's Future

Krafton’s pivot toward an “AI‑first” future and Pocketpair’s vow to ban generative AI in its new publishing arm have forced the games industry’s current fault lines into plain view: two very different answers to the same pressure—scale creative output, cut costs, and respond to players’ demands—each with deep technical, ethical, and business implications.

(Image: split scene showing a blue AI lab lined with servers on the left and an artist drawing cute creature sketches on the right.)

Background

The last three years have pushed artificial intelligence from an experimental tool used for upscaling textures and automating QA into an operational lever that publishers and platform holders are designing whole strategies around. Big publishers and service providers have publicly committed to agentic, automated pipelines; platform operators have begun requiring transparency about AI use; and indies are split between tooling adoption and promises of human‑first craft.
That tectonic shift matters more for games than for many other creative products because games are simultaneously code, art, narrative, online service, and community hub. Changes to asset production, localization, live content generation, or QA ripple across operations, costs, IP risk, and community trust. Evidence of that ripple shows up in two headline moves this month: Krafton’s “AI‑First” declaration and Pocketpair’s anti‑generative‑AI publishing rule. Krafton’s plan is built on massive compute and agentic automation; Pocketpair’s stance is a cultural and editorial bet on authenticity and human authorship.

Overview: two philosophies, one industry

  • Krafton: scale via agentic AI and automation, rewire the company around AI workflows, invest in on‑prem GPU infrastructure, and reallocate human labor toward higher‑value creative work.
  • Pocketpair: explicitly refuse generative AI, Web3, and NFTs in the titles it will publish, positioning the studio and its publishing label as an authenticity brand in an era of increasing automation.
These moves are not rhetorical. Krafton has publicly described a large capital investment in GPU infrastructure and a corporate timetable for embedding agentic systems across operations. Pocketpair’s publishing head has been equally blunt in interviews: if a game’s shipped assets rely on generative AI, Pocketpair won’t take the deal. Both positions are credible responses to real market forces, but their divergences expose practical tradeoffs and risks that every studio, publisher, and platform will need to manage.

Krafton’s “AI‑First” transformation: what they announced

Krafton has declared an “AI‑First” strategic pivot that repositions AI from a tool to a core operating principle across product, operations, and HR. The company’s public roadmap emphasizes three pillars: cultural adoption (training, hackathons, and internal learning hubs), organizational innovation (specialized R&D teams and new management models), and reinvestment of time saved through automation into new games and IP development.
Key, verifiable specifics Krafton published:
  • A planned investment of roughly KRW 100 billion (1,000억 won) to build a dedicated GPU cluster and AI infrastructure. This investment will underpin internal AI workflows, R&D, and in‑game AI services. The company describes the target hardware as NVIDIA B300‑class infrastructure to support agentic and multi‑stage reasoning workloads, and its announcement sets a delivery target for an enterprise AI platform and management systems by the second half of 2026.
  • A commitment to increase internal AI tooling budgets dramatically—Krafton said it will allocate a recurring annual budget (reported as KRW 30 billion per year starting in 2026) to give employees access, training, and running costs for AI tools. This budget is explicitly framed as a worker‑facing investment so staff can “use and experiment” with AI directly.
  • The company intends to adopt Agentic AI—systems that can set goals, plan across multiple steps, and act autonomously on repetitive or analytical tasks—rather than only using generative models as one‑off content tools. Krafton explicitly links agentic automation to a reorganization of roles and the expansion of organizational span of control.
Those announcements were published by Krafton itself and subsequently covered by independent outlets; the vendor hardware referenced—NVIDIA DGX B300 / HGX B300 infrastructure and the Blackwell‑class GPUs it contains—is a current generation of enterprise AI hardware designed for high‑throughput training and inference workloads. NVIDIA documentation and vendor pages confirm the B300 family’s purpose as datacenter‑grade infrastructure for AI R&D and inference.

Why Krafton is betting on agentic systems

Krafton frames the move as both competitive necessity and a productivity play. The company argues that agentic automation will:
  • Shorten iteration loops for design and prototyping,
  • Automate repetitive QA and build tasks,
  • Provide new in‑game AI services that can scale personalized experiences,
  • Free creative staff to focus on higher‑value systems design and storytelling.
Those are practical benefits: generative and procedural tooling already quickens concept iteration; large compute systems make it feasible to run massive synthetic QA and balance sweeps; and agentic orchestration can coordinate complex pipelines that create, test, and assemble content at scale. The business case is straightforward if the systems work as promised.
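To make the “synthetic balance sweep” idea concrete, here is a minimal, hypothetical sketch in Python: simulated matches run in parallel across a grid of tuning parameters, and cells with lopsided win rates are flagged for a human designer. The simulate_match function is a toy stand‑in for a real headless game simulation; nothing here reflects Krafton’s actual tooling.

    # Hypothetical balance sweep: run many simulated matches per parameter
    # combination in parallel and flag lopsided results for human review.
    # simulate_match is a toy stand-in for a real headless simulation.
    import itertools
    import random
    import statistics
    from concurrent.futures import ProcessPoolExecutor

    def simulate_match(weapon_damage: float, armor_value: float, seed: int) -> float:
        """Return 1.0 if the attacker wins this simulated match, else 0.0."""
        rng = random.Random(seed)
        # Toy model: win chance rises with damage and falls with armor.
        p = max(0.0, min(1.0, 0.5 + 0.01 * (weapon_damage - armor_value)))
        return 1.0 if rng.random() < p else 0.0

    def sweep(damages, armors, runs_per_cell=500):
        results = {}
        with ProcessPoolExecutor() as pool:
            for dmg, arm in itertools.product(damages, armors):
                outcomes = pool.map(simulate_match,
                                    [dmg] * runs_per_cell,
                                    [arm] * runs_per_cell,
                                    range(runs_per_cell))
                results[(dmg, arm)] = statistics.mean(outcomes)
        return results

    if __name__ == "__main__":
        for (dmg, arm), win_rate in sorted(sweep([40, 50, 60], [20, 30, 40]).items()):
            flag = "  <-- review" if abs(win_rate - 0.5) > 0.15 else ""
            print(f"damage={dmg} armor={arm} win_rate={win_rate:.2f}{flag}")

The point is not the toy math but the shape of the workload: thousands of cheap, independent simulations that parallelize trivially, which is exactly what large compute clusters make routine.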

Risks and open questions

Krafton’s plan is technically ambitious and organizationally disruptive. The announcement itself acknowledges broad change, yet it leaves several critical questions open:
  • Workforce impact: Krafton promises reinvestment of time saved into creative work, but history shows automation often compresses roles before new ones emerge. Without explicit job‑security measures, headcount reductions (particularly among contractors and early‑career staff) remain a realistic risk.
  • IP and training data: Agentic systems that generate art, code, or behavior may use models trained on public or licensed data. If training datasets include copyrighted game assets or third‑party art, legal risks could follow. The industry has already seen legal and reputational consequences when training data provenance is unclear.
  • Quality and homogeneity: Large‑scale automation prioritizes throughput; without tight human curation and editorial standards, the market risks an increase in derivative, “AI‑slop” titles—low‑effort but high‑volume products that dilute discoverability and player trust.
  • Operational cost and sustainability: Running Blackwell‑class clusters at scale is expensive and power‑intensive. Capital outlays are only the beginning; operational power, real estate, cooling, and skilled datacenter ops are recurring costs that must be justified by business outcomes. NVIDIA’s DGX B300 and HGX B300 systems are engineered for high density and performance, but they also consume multi‑kilowatt power envelopes and require specialized deployment.
All of these tradeoffs mean Krafton’s strategy will succeed only if it pairs compute investments with rigorous governance: explicit policies about what AI may produce, provenance and audit trails for assets, retraining programs for staff, and transparent communication with external communities and partners.
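As one concrete illustration of what “provenance and audit trails for assets” could mean in practice, here is a small, hypothetical Python sketch that writes a JSON sidecar next to each AI‑assisted asset, recording the model, the prompt, the training‑data license, and the human who signed off. The schema is invented for illustration and is not Krafton’s (or anyone’s) actual pipeline.

    # Hypothetical provenance sidecar for an AI-assisted asset. The schema
    # is illustrative; a real pipeline would hook this into asset import.
    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def write_provenance(asset_path: str, model_id: str, prompt: str,
                         training_data_license: str, reviewer: str) -> Path:
        asset = Path(asset_path)
        record = {
            "asset": asset.name,
            "sha256": hashlib.sha256(asset.read_bytes()).hexdigest(),
            "generated_by": model_id,          # e.g. an in-house fine-tune
            "prompt": prompt,
            "training_data_license": training_data_license,
            "human_reviewer": reviewer,        # human-in-the-loop sign-off
            "reviewed_at": datetime.now(timezone.utc).isoformat(),
        }
        sidecar = asset.with_suffix(asset.suffix + ".provenance.json")
        sidecar.write_text(json.dumps(record, indent=2))
        return sidecar

A record like this costs almost nothing to produce at generation time and is nearly impossible to reconstruct after the fact, which is why audit trails belong in the pipeline rather than in a post‑hoc compliance pass.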

Pocketpair and Palworld: betting on authenticity

On the opposite end of the spectrum, Pocketpair—the independent studio behind Palworld—has publicly launched a publishing division and taken a clear editorial stance: it will not publish games that use generative AI, Web3, or NFTs. The division’s head, John Buckley, has been quoted saying the label “doesn’t believe in” generative AI for shipped assets and that Pocketpair wants to champion human‑made craft. The new publishing arm reportedly received more than 150 pitches in its first week, illustrating developer interest despite the restriction.

Why Pocketpair is pushing back

Pocketpair’s stance rests on several interlocking claims:
  • Authenticity sells: the studio expects an emergent “authenticity market” where players pay a premium for clearly human‑made, provenance‑verified works.
  • Reputation risk avoidance: Palworld itself faced accusations of using AI for art and machine translation; the company says those accusations were unfounded and that the controversy made them cautious about the broader risks of AI claims and misattribution.
  • Editorial curation: by refusing AI‑generated assets, Pocketpair aims to differentiate its publishing label as a curator of craftsmanship, which can be an attractive value proposition for developers who fear being drowned by low‑effort AI titles on storefronts.

Practical and legal context

Pocketpair’s position is notable for how explicit it is: most publishers avoid blanket bans because generative tooling increasingly permeates localization, code scaffolding, and prototyping workflows. But Pocketpair’s public line appeals to a segment of players and creators who worry about job displacement, the dilution of IP norms, and the loss of distinctive artistic voice.
That said, Pocketpair remains exposed to the same economic pressures as other studios. Its hit Palworld has also been the subject of a patent lawsuit filed by Nintendo and The Pokémon Company in September 2024, alleging infringement of patent rights related to certain game mechanics. That legal fight is a reminder that human‑authored works can still run into IP disputes, and refusing AI does not eliminate legal complexity.

Platform responses and the disclosure imperative

The industry’s middle ground is shifting toward transparency requirements. Valve/Steam updated its review and disclosure procedures around AI use in games: developers are now expected to disclose when AI has been used to generate assets, and live‑generated content requires additional safeguards and disclosure of guardrails to prevent illegal or infringing outputs. Storefront transparency is becoming a basic trust mechanism for players deciding what to buy and for publishers who want to demonstrate provenance.
This combination of publisher policies and platform disclosure means the commercial effect of generative tooling is now shaped by three factors:
  • Platform moderation and labeling,
  • Publisher editorial standards (like Pocketpair’s ban),
  • Developer transparency and asset provenance management.
Together these three layers will determine who benefits from AI tooling and who faces reputational or commercial consequences.

Technical reality: agentic AI vs. generative tooling

The terms “agentic AI” and “generative AI” are often conflated but mean different things in production contexts:
  • Generative AI (images, text, audio) is used primarily for content creation—concept art, dialogue scaffolding, music beds, or even pitch prototypes.
  • Agentic AI layers planning, goal setting, and multi‑step orchestration over generative primitives. An agentic system doesn’t just create an image; it might iterate a concept, validate it against design rules, queue it for QA, and insert it into a build pipeline automatically.
Krafton’s roadmap emphasizes agentic capabilities: systems that can act on behalf of teams to automate workflows. Those systems require orchestration, observability, and rigorous guardrails to avoid cascading errors or copyright violations. The underlying compute Krafton references, NVIDIA DGX / HGX B300 modules built on Blackwell‑class GPUs, is explicitly tailored to such high‑throughput, multi‑stage tasks; NVIDIA markets the B300 class for enterprise AI reasoning workloads, and it is already being deployed for similar large‑scale jobs.
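To make the distinction concrete, here is a deliberately simplified Python sketch of the agentic pattern described above: a loop that generates a draft, checks it against design rules, retries with feedback, and only then hands off to QA. Every function here is a hypothetical stand‑in, not any vendor’s or Krafton’s actual API.

    # Minimal sketch of an agentic content loop: generate, validate against
    # design rules, revise with feedback, then enqueue for human QA.
    # All callables are hypothetical stand-ins for model and build tooling.
    from dataclasses import dataclass, field

    @dataclass
    class Draft:
        description: str
        revisions: list = field(default_factory=list)

    def generate_concept(brief: str, feedback: list) -> Draft:
        """Stand-in for a generative-model call."""
        note = f"{brief} (addressing: {'; '.join(feedback)})" if feedback else brief
        return Draft(description=note, revisions=list(feedback))

    def design_rule_violations(draft: Draft) -> list:
        """Stand-in for automated checks (palette, silhouette, IP screening)."""
        return [] if "addressing" in draft.description else ["needs palette pass"]

    def enqueue_for_qa(draft: Draft) -> None:
        print(f"queued for human QA: {draft.description}")

    def agentic_pipeline(brief: str, max_iterations: int = 3) -> None:
        feedback = []
        for _ in range(max_iterations):
            draft = generate_concept(brief, feedback)
            problems = design_rule_violations(draft)
            if not problems:
                enqueue_for_qa(draft)   # humans still sign off downstream
                return
            feedback = problems         # the agent plans its own next step
        print("escalated to a human after repeated failures")

    agentic_pipeline("forest creature, friendly, crayon style")

The difference from a one‑shot generative call is the loop itself: the system decides whether to retry, with what feedback, and when to hand work off or escalate, which is also where guardrails and observability have to live.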

The human cost and labor dynamics

Automation always reframes labor economics. Three labor outcomes are plausible and deserve explicit planning:
  • Augmentation and reskilling: staff become higher‑value creatives and system designers; companies invest in retraining for AI oversight and agent governance.
  • Role compression: repetitive and mid‑level tasks (iterative concepting, basic QA, localization first passes) become consolidated or outsourced to automated pipelines, reducing the total number of roles.
  • New roles and specialization: ops, model‑ops, prompt engineering, AI ethics, and provenance auditing become standard teams inside studios and publishers.
Krafton has said it will allocate funds to train staff and give them access to AI tools; however, the company’s plan to reorganize managerial spans and adopt AI‑centric norms means studio heads, HR, and unions (where present) will need clear agreement on job transitions, severance, and retraining programs. Without those, automation can erode morale and community trust.
Pocketpair’s stance attempts to protect creative roles by signaling a market for human‑made work. That position may preserve roles at smaller studios or those selling to a “craft‑first” audience, but it is not immune to economic pressure from competitors that achieve massive cost reductions or scale through automation.

Business strategy implications

  • For AAA publishers: heavy investment in agentic systems can reduce time‑to‑market for large IP, enable massive live services, and unlock new personalization features. But success requires governance and legal clarity around training data and output provenance.
  • For indies & boutique publishers: an authenticity positioning (Pocketpair’s approach) can become a differentiator if players value provenance—and if storefronts and discovery mechanisms allow niche curation to be financially viable.
  • For platforms: disclosure and review policies will be essential to maintain buyer trust and to prevent flooding with low‑quality, AI‑generated titles. Valve’s policy adjustments are an early example of platform governance in action.

Recommended guardrails for studios and publishers

  • Asset provenance: require and preserve source files and layered originals for all shipped art, audio, and dialogue. This helps prove human authorship where required.
  • Transparent policies: publish a clear, machine‑readable statement on AI usage for each released title; a hypothetical example of such a manifest follows this list. Platforms are moving that way, and being proactive beats being forced into disclosure after the fact.
  • Human‑in‑the‑loop: mandate human sign‑off on critical creative outputs—character designs, iconic assets, and final narrative beats—so that generative outputs are treated as drafts rather than final works.
  • Employee transition plans: if automation is adopted, commit to retraining budgets, redeployment programs, and clear timelines for role changes.
  • Legal and dataset due diligence: maintain auditable logs of model training sources, licensing agreements for third‑party models, and rights clearance for any data used to fine‑tune in‑house models.
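To make the “machine‑readable statement” idea concrete, here is one hypothetical shape such a disclosure could take, sketched as a short Python script that emits and sanity‑checks a per‑title manifest. The pregenerated/live‑generated split mirrors the distinction Steam’s disclosure form draws, but every field name here is invented for illustration rather than taken from any platform’s actual schema.

    # Hypothetical per-title AI-usage manifest, loosely modeled on the
    # pre-generated vs. live-generated split in Steam's disclosure form.
    # All field names are illustrative, not any platform's actual schema.
    import json

    manifest = {
        "title": "Example Game",
        "ai_usage": {
            "pregenerated": {
                "used": True,
                "scope": ["concept-art drafts", "localization first pass"],
                "human_signoff": True,      # drafts reviewed before shipping
            },
            "live_generated": {
                "used": False,
                "guardrails": [],           # required if "used" is True
            },
        },
        "web3_or_nfts": False,
    }

    def validate(m: dict) -> list:
        """Return a list of problems; empty means the manifest is consistent."""
        problems = []
        pre = m["ai_usage"]["pregenerated"]
        live = m["ai_usage"]["live_generated"]
        if live["used"] and not live["guardrails"]:
            problems.append("live-generated content declared without guardrails")
        if pre["used"] and not pre["human_signoff"]:
            problems.append("pre-generated assets shipped without human sign-off")
        return problems

    print(json.dumps(manifest, indent=2))
    print(validate(manifest) or "manifest OK")

A validator like this can run in CI, so a title cannot ship with, say, live‑generated content declared but no guardrails documented.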
These steps are practical and achievable; the studios that adopt them early will reduce legal risk and preserve player and creator trust.

Conclusion: an industry at a crossroads

Krafton and Pocketpair are not simply taking different technical approaches—they’re making different bets about where value in games will come from over the next decade. Krafton is betting that scale, orchestration, and agentic automation will unlock new kinds of creativity and operational efficiency, backed by heavy compute investments and new organizational structures. Pocketpair is betting that authenticity, human authorship, and clear provenance will become a market advantage as AI‑generated content proliferates.
Both bets are rational responses to the same forces. Neither is guaranteed. The likely industry outcome is a segmented ecosystem: large publishers will pursue agentic, compute‑heavy automation for scale and live services, while a substantial market niche will reward artisanal, human‑first titles that emphasize provenance and craft. Platforms, legal frameworks, and player expectations will determine how wide each lane becomes.
The practical upshot for developers and players is immediate: expect more explicit disclosure requirements, more corporate investment in AI infrastructure and governance, and sharper editorial differentiation among publishers. Studios that plan for governance, transparency, and worker transition will fare better than those that treat AI as a plug‑and‑play cost saver. And players should expect clearer signals about which games use AI and how.
(Technical claims in this article—the Krafton investment figures and timelines, Pocketpair’s publishing policy statements, NVIDIA B300 specifications, Valve/Steam disclosure changes, and Nintendo’s legal action—are drawn from company announcements and reporting by major outlets. Some dollar‑value conversions and long‑term forecasts are estimates and are flagged where the original statements were presented in local currency or as company goals rather than guaranteed outcomes.)

Source: Windows Central, “Gaming’s AI crossroads: PUBG maker leans in, Palworld pushes back. Here's what's going on.”
 
