When Ilya Sutskever, one of OpenAI’s founding researchers and long-time Chief Scientist, stepped away from the company in mid-2024, there was intense speculation about who would emerge to steer the organization’s research agenda and safeguard its claim to the artificial intelligence crown. In the months since, OpenAI’s leadership core has become anchored by two figures born in the early 1990s—Mark Chen and Jakub Pachocki—each with distinct expertise and complementary responsibilities but a shared vision for where AI is heading and what it will take to get there.

The Rise of a New Generation: Mark Chen and Jakub Pachocki

Both Chen and Pachocki have reputations that reach far beyond OpenAI’s walls. Their ascension coincides with a period of industry-wide talent wars, spiraling infrastructure needs, and a high-stakes competition to define the future of foundational model research and deployment.

Mark Chen: Architect of Teams and Models​

Mark Chen, OpenAI’s Chief Research Officer, is widely credited with driving pivotal advances in generative models: DALL-E’s creative visual synthesis, the coding-focused Codex, the scalable architecture of GPT-3, and the image-understanding capabilities introduced with GPT-4. Chen’s technical fingerprints are on many of the projects that underpin OpenAI’s current competitive edge.
His personal story carries a touch of Silicon Valley legend: in the spring of 2025, Meta CEO Mark Zuckerberg, reportedly spurred by a conversation with Chen about the importance of investing in AI talent, offered him a package said to reach as high as $1 billion to join Meta’s AI push. Chen declined, describing his role at OpenAI as deeply meaningful and a source of ongoing satisfaction. While the exact dollar figures remain unverified and should be treated with skepticism, the substance of the story, that Zuckerberg targeted Chen as a keystone recruit and was rebuffed, has been corroborated by multiple independent press reports and by Altman’s public comments.
Prior to joining OpenAI, Chen earned a computer science degree from MIT and developed machine learning models for quantitative trading at Jane Street Capital and Integral Technology. His background in algorithmic thinking and optimization is mirrored in his leadership at OpenAI, where he is known for assembling cross-disciplinary, competition-hardened research teams. As a coach for the USA Computing Olympiad (USACO), Chen has also fostered a strategic focus at OpenAI on international mathematics and programming competitions, a pipeline for scouting elite research talent.

Jakub Pachocki: From Olympiad Prodigy to Chief Scientist​

Jakub Pachocki, who succeeded Sutskever as Chief Scientist, joined OpenAI in its early years and rapidly distinguished himself—not only as a technical lead, but as a researcher with an uncanny ability to solve core theoretical problems. With a doctorate from Carnegie Mellon—completed in just three years—and postdoctoral work at Harvard, Pachocki is as at home in academia as in industry, but he is perhaps best known on the global stage of competitive programming.
He excelled at the International Olympiad in Informatics and the International Collegiate Programming Contest (ICPC), and won Google Code Jam, a track record that places him among the world’s top algorithmic thinkers of his generation. At OpenAI, Pachocki has led landmark projects including the Dota 2 reinforcement learning program, has headed both the inference and deep learning science teams, and oversaw research for models including GPT-4 and the o-series of reasoning models.
When Pachocki was promoted to Chief Scientist in 2024 after Sutskever’s departure, Altman publicly praised him as “one of the most outstanding minds of our generation,” a view echoed by AI insiders.

Defining Roles and Mutual Influence​

The division of labor between Chen and Pachocki is clear but fluid—Chen builds and manages research teams, cultivates talent, and frames project tactics; Pachocki sets the roadmap for core research, guiding OpenAI’s long-term bet on scalable intelligence and AGI-like reasoning.
Yet both take a pragmatic approach to leadership. They describe themselves as researchers first and managers second, with each freely intervening anywhere in the stack when technical rigor is needed. The trust and intellectual shorthand they share, likely forged during their years on the international programming circuit and at the helm of high-pressure teams, allow for rapid, effective responses to unforeseen challenges.

How the Leadership Shift Reshapes OpenAI’s Research​

The exit of Ilya Sutskever, and with it the disbanding of the much-debated Superalignment team, was a seismic event. Sutskever had long argued that aligning superintelligent AI, governing its safety and behavior, should be the company’s defining technical mission, and at one point he secured a commitment of roughly 20 percent of OpenAI’s compute for that work. Under Chen and Pachocki, alignment research has been reorganized and distributed throughout OpenAI’s core teams rather than centralized in a separate silo. Their view: safety and utility must be inseparable, not sequential; as Pachocki put it, “the model must work as expected,” and if it does not, no separate safety process can compensate.
This pragmatic, integrationist stance has drawn both praise and concern. On the upside, it promises to accelerate the translation of pure research into robust, user-facing products, answering one of the primary criticisms leveled at earlier attempts to “quarantine” alignment work. On the risk side, critics argue that dissolving a dedicated alignment group may reduce internal checks and slow the nuanced, long-term safety research that is not immediately productizable.

Competitions as Research Laboratories​

One distinguishing feature of the new leadership is how seriously they treat international competitions, not just as recruiting grounds but as crucibles for testing general intelligence. Under their watch, OpenAI models routinely compete in math and programming contests, recently achieving a gold-medal-level score at the International Mathematical Olympiad (IMO) and taking second place in the global AtCoder programming challenge, bested only by the famed human competitor “Psyho,” himself a former OpenAI employee and a friend of Pachocki.
These contests serve a dual purpose. First, they supply OpenAI’s models with real, adversarial benchmarks and exposure to problems demanding deep reasoning and creative insight. Second, each defeat (as with AtCoder) uncovers blind spots and training gaps, providing invaluable data for further model improvement. As Pachocki put it, “Programming and mathematics are…about creativity, coming up with novel ideas, and connecting ideas from different places.” The philosophy is to pit models directly against the best human problem-solvers in “open world” conditions.

Technical Progress: From Scaling Laws to Autonomous Research​

Chen and Pachocki have consistently argued that the much-discussed “scaling laws” fueling LLM progress, the observation that bigger models trained on more data and compute yield better results, have not yet hit a wall, even in domains requiring complex reasoning. Their bet: with the right data curation and architectural tweaks, language models will increasingly approach human-like skill in abstract problem-solving, logical synthesis, and autonomous research.
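For readers unfamiliar with the term, these scaling laws are usually expressed as a power law in parameter count and training tokens. The sketch below uses the Chinchilla-style form and the constants published by Hoffmann et al. (2022) purely as an illustration; it is not OpenAI’s internal formulation, and the specific numbers are that paper’s fits rather than anything OpenAI has disclosed.

```python
# Illustrative Chinchilla-style scaling law (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# The constants below are the published fits from that paper, shown only to
# make the "bigger models + more data -> lower loss" argument concrete.

def predicted_loss(params: float, tokens: float,
                   E: float = 1.69, A: float = 406.4, B: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted pretraining loss for a model with `params` parameters
    trained on `tokens` tokens."""
    return E + A / params**alpha + B / tokens**beta

# Scaling parameters and data together keeps lowering the predicted loss,
# which is the sense in which the curves have "not yet hit a wall".
for n, d in [(7e9, 1.4e12), (70e9, 1.4e12), (70e9, 2.8e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss {predicted_loss(n, d):.3f}")
```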
Recent releases bear out the scaling argument, at least on headline benchmarks. GPT-4o and the anticipated GPT-5 have improved across reputable academic and industry challenges, including mathematics, scientific reasoning, and code generation. Reported performance on “Humanity’s Last Exam,” FrontierMath, and DSBench increasingly approaches or surpasses that of graduate-level candidates on some expert tasks.
Yet both leaders stress that “bigger is better” holds only insofar as training data and task framing progress in parallel. In recent interviews and OpenAI blog posts, they have flagged the ongoing challenge of enabling LLMs to “connect knowledge” and to self-improve in novel, open-ended domains, tasks that still evade even the most powerful models.
They see “autonomous time”—the interval in which an AI can research and refine answers without human guidance—as a critical next frontier.
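To make the idea concrete, here is a minimal, hypothetical sketch of how one might measure autonomous time: let a model keep refining its own work with no human input and record how long it sustains verifiable progress. The propose_next_step and verify_progress callables are stand-ins invented for this illustration, not OpenAI tooling.

```python
import time

def autonomous_time(task: str, propose_next_step, verify_progress,
                    max_steps: int = 50) -> float:
    """Seconds of unassisted work before the model stops making progress.

    propose_next_step(notes) -> str : stands in for a model call that refines
                                      the current working notes.
    verify_progress(notes, step) -> bool : stands in for an automatic check
                                           that the new step is real progress.
    """
    start = time.monotonic()
    notes = [f"Task: {task}"]
    for _ in range(max_steps):
        step = propose_next_step(notes)        # model iterates on its own output
        if not verify_progress(notes, step):   # stop once progress stalls
            break
        notes.append(step)
    return time.monotonic() - start

# Toy usage with stub callables, purely to show the shape of the loop.
steps = iter(["outline", "draft", "refine", ""])
elapsed = autonomous_time("survey scaling-law literature",
                          propose_next_step=lambda notes: next(steps),
                          verify_progress=lambda notes, step: bool(step))
print(f"autonomous time: {elapsed:.4f} s")
```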

The Changing Infrastructure: Compute Wars and the Need for Scale​

Under the stewardship of Chen and Pachocki, OpenAI has grown markedly more aggressive in scaling its computational backbone. No longer relying solely on Microsoft’s Azure infrastructure, OpenAI has in the past year brokered landmark deals to bring Google Cloud, Oracle, and AI specialist CoreWeave into the fold. This “multi-cloud” strategy is not just about redundancy; it is a direct response to surging GPU scarcity and volatile costs. Figures from Microsoft and Google earnings calls show the scale: Microsoft has disclosed that Azure’s annual revenue now exceeds $75 billion, with AI services a major growth driver, while Google Cloud posted over $13.6 billion in revenue in Q2 2025 and continues to pour billions into bespoke AI data center buildouts.
The technical rationale is straightforward: as LLMs grow in size and complexity (and as OpenAI’s ambitions bleed into new products like multimodal agents and tool-using AIs), single-supplier cloud deals have become a vulnerability rather than a strength. The so-called “Stargate” multi-partner initiative, involving SoftBank, Oracle, and others, aims to future-proof OpenAI’s pursuit of a scalable, low-latency AI backbone—though not without raising concerns about data sovereignty, operational complexity, and “frenemy” dynamics among cloud hyperscalers.

Strengths of the Chen–Pachocki Era​

  • Research-Driven, Product-Aware: Both leaders maintain ties to academic-style inquiry yet relentlessly drive toward deployable models, bridging a gap that often stymies AI labs.
  • Talent Magnetism: The public identity and personal networks of both figures are major attractors for top-tier researchers and competitive programmers.
  • Global Benchmarking: Regular participation (and competitive success) in global programming and mathematics competitions ensures that OpenAI’s models remain tested against the best.
  • Infrastructure Agility: Shifting to a multi-cloud, specialized compute model has let OpenAI sidestep supply chain bottlenecks and power further innovation.

Potential Risks and Weaknesses​

  • Safety and Alignment Concerns: The integration of superalignment research into broader teams, while efficient, risks deprioritizing the “hard problem” of AI values, intent, and control. Dissenting AI safety experts and some OpenAI insiders have warned that the winding down of a stand-alone Superalignment Team may reduce visibility and rigor in this pivotal area.
  • Overextension and Burnout: The “all hands on deck” model of technical intervention, while a strength in high-performance teams, can lead to leadership bottlenecks, burnout, or decision gridlock as the organization scales.
  • Multi-Cloud Complexity: Juggling infrastructure deals with Microsoft, Google, Oracle, and CoreWeave exposes OpenAI to integration headaches, security risks, and regulatory scrutiny.
  • Talent Wars Intensify: As competitors like Meta, Google DeepMind, and Anthropic redouble their own recruitment and research pushes (often targeting OpenAI staff), instability in OpenAI's leadership pipeline could echo past disruptions.
  • Market and Political Risk: OpenAI’s increasingly public posture and tightly-coupled relationships with Big Tech draw it into broader regulatory, antitrust, and national-security debates—issues that sometimes exceed purely technical challenges.

Culture and Collaboration: “Mom and Dad” Leadership​

A tongue-in-cheek meme recently circulating in tech circles dubbed the Chen–Pachocki partnership as “so strong even ChatGPT calls them mom and dad.” Beyond the joke lies a real cultural observation: OpenAI’s research ethos has become more familial, team-driven, and resilient to individuals leaving than in the “cult of founder” days.
Altman’s visibility as a public figure, meanwhile, supplies air cover for the research core, enabling the technical leadership to stay focused even during turbulent periods, such as the anticipated rollout of GPT-5 and the industry’s continuing “talent wars.” Observers and insiders alike agree: with a new guard in place, OpenAI is no longer the monolith of a single visionary but an agile ship piloted by a close-knit, high-performing team.

The Road Ahead: Opportunities and Questions​

The convergence of advanced infrastructure, an elite research workforce, and a pragmatic integration of alignment and product teams positions OpenAI at the vanguard of practical AI deployment. As agentic AI becomes less about theory and more about transformative workplace, media, and creative applications, OpenAI’s dual pillars—Chen and Pachocki—are likely to remain at the center of global debate over both the promise and peril of general-purpose AI.
But there remain questions only the months ahead can answer. Will OpenAI maintain its pace of innovation without sacrificing safety? Can it balance the centrifugal pressure of multi-cloud partnerships with the centripetal force of a coherent research vision? And, most crucially for the world, will its models remain secure, fair, and reliable as their reach extends deeper into society?
What’s clear is that OpenAI’s core research is now held up by an unusually strong, well-matched partnership—one tasked with not only advancing the science, but also safeguarding the future of intelligence itself. As the battle for AGI heats up, all eyes are on the heirs of Sutskever’s legacy to see if they can deliver on what their generation of AI researchers has so boldly promised.

Source: 36Kr, “Two Post-1990s Individuals Prop Up Core Research at OpenAI After Ilya”
 
