Dave Plummer’s confession — that his Windows NT port of the beloved Space Cadet pinball ran “as fast as it could,” eventually spiking to “like, 5,000 frames per second” on modern hardware — is as entertaining as it is instructive: a compact engineering lesson about timing assumptions, busy loops, and the long tail of legacy software in an era of ever‑faster CPUs. (en.wikipedia.org)
Background / Overview
Space Cadet (often known to Windows users as “3D Pinball for Windows — Space Cadet”) began life as the Space Cadet table in the commercial Full Tilt! Pinball package. Microsoft licensed that table and shipped a single‑table build as a Windows pack‑in across multiple releases, beginning with Microsoft Plus! 95 and continuing through Windows NT 4.0, Windows 2000, Windows Me and Windows XP. The game’s ubiquity — bundled on millions of PCs in the era before ubiquitous downloads and app stores — turned a small, idiosyncratic artifact into a cultural touchstone. (en.wikipedia.org)
When Microsoft ported the Space Cadet table to Windows NT (to support non‑x86 architectures such as MIPS, Alpha and PowerPC), Dave Plummer took on the engineering task of replacing platform‑specific pieces (notably the parts written in x86 assembly) with cross‑platform C/C++ so the game would run across NT’s supported CPU families. In doing so he wrapped the original gameplay logic with a new rendering and sound layer. That wrapper, however, had a simple but consequential design choice: its main rendering loop did not deliberately pace itself. On contemporary RISC workstations this appeared harmless; when hardware later accelerated, it did not. (en.wikipedia.org)
Across multiple retellings — Plummer’s own talks and interviews, Raymond Chen’s blog and oral accounts from other Microsoft engineers — the story is consistent: a render‑as‑fast‑as‑possible loop that was benign on older hardware became a CPU‑hogging runaway on modern, multicore machines. Raymond Chen tracked the problem down in Microsoft builds and applied a pragmatic limit, capping draw rate and immediately eliminating the pathological core pegging. Chen later called that fix one of his proudest Windows‑era triumphs. (devblogs.microsoft.com, theregister.com)
What exactly happened: the technical anatomy
The offending pattern: busy loop without pacing
At the heart of the issue was a classic anti‑pattern: a rendering loop that repeatedly:
- Updated animation state and physics interpolation,
- Issued draw calls to the GPU or GDI,
- Immediately repeated, with no sleep, yield, timer wait, or vertical synchronization request to pace execution.
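To make the shape of that loop concrete, here is a minimal sketch of the anti‑pattern (illustrative only, not the actual Pinball source; UpdateSimulation and RenderFrame are hypothetical stand‑ins for the wrapper’s real functions):

```cpp
// Illustrative unpaced render loop (not the Pinball source; the two
// functions below are hypothetical stand-ins for the real wrapper).
#include <atomic>

std::atomic<bool> g_running{true};

void UpdateSimulation() { /* advance animation and physics state */ }
void RenderFrame()      { /* issue draw calls (GDI/DirectDraw era) */ }

int main() {
    while (g_running.load()) {
        UpdateSimulation();  // step state once per iteration
        RenderFrame();       // draw, then immediately loop again
        // No Sleep(), no timer wait, no vsync request: the loop spins
        // flat out, pegging one core, and the frame rate scales directly
        // with CPU speed -- 60-90 fps on a 1990s MIPS workstation,
        // thousands of frames per second on modern hardware.
    }
}
```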
Why that matters in practice
- A single user‑mode process pegging a CPU core reduces the cycles available to every other process. On the effectively single‑core machines of the era, that could dramatically degrade interactivity and background tasks.
- Busy loops defeat OS power management. A core that never idles prevents deep CPU sleep states and increases power draw and heat — a particularly painful effect for laptops and later mobile form factors.
- When physics/logic are tied to rendered frames, increased frame counts can accelerate gameplay timing (if the code uses delta time incorrectly), changing perceived game behavior or creating numerical instability.
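The last point is easy to see in miniature. In the hypothetical update below, the first version hard‑codes a per‑frame step, so gameplay speed scales with frame rate; the second scales by measured elapsed time:

```cpp
// Hypothetical ball update illustrating frame-coupled timing.
struct Ball { double x = 0.0, vx = 120.0; };  // vx in pixels per second

// Wrong: assumes every rendered frame represents 1/60 s of game time.
// At 5,000 fps the ball would move roughly 83x too fast.
void UpdatePerFrame(Ball& b) {
    b.x += b.vx * (1.0 / 60.0);
}

// Frame-rate independent: scale by the real time since the last frame.
void UpdateWithDelta(Ball& b, double dtSeconds) {
    b.x += b.vx * dtSeconds;
}
```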
Corroboration and verification of the key facts
- Dave Plummer has publicly described his work porting the Space Cadet table to Windows NT and explained that his wrapper rendered frames “as fast as it could,” later recounting that on modern multi‑core systems the process ended up using an entire core and drawing frames in the thousands per second. This account appears in his own talks and interviews and has been reported widely in technology press. (reddit.com)
- Raymond Chen — veteran Windows engineer and author of the long‑running “Old New Thing” column — has told the parallel story from the servicing/fix side: he discovered the uncapped loop, observed debug counters that overflowed typical displays, and implemented a frame‑rate cap (commonly recounted as 100 fps) that reduced Pinball’s CPU use to a negligible fraction and restored usability during builds. Chen has discussed this publicly as one of his satisfying fixes. (devblogs.microsoft.com, theregister.com)
- The genealogy of the game (derived from Full Tilt! Pinball and bundled across Windows 95 through Windows XP) and its eventual removal from mainstream Windows releases due to 64‑bit porting issues are documented in multiple independent sources, including the historical record and developer posts recounting collision and runtime issues encountered during 64‑bit conversions. These references confirm the timeline and the removal story. (en.wikipedia.org, theregister.com)
- The hardware context Plummer cited — MIPS R4x00 family processors running in the low hundreds of MHz — is consistent with historical processor specifications for the R4000/R4400 families, which had models operating around 100–250 MHz in the early‑to‑mid 1990s. That makes his recollection of “60–90 fps was plenty on that hardware” technically plausible. Still, the exact numeric claims about “5,000 fps” are anecdotal measurements and vary by machine; treat them as illustrative rather than laboratory‑grade metrics. (en.wikipedia.org)
Why the fix was simple — and why simple matters
Raymond Chen chose a pragmatic mitigation: add a frame‑rate limiter and let the existing game logic remain unchanged. That approach has several virtues:
- Low risk: it modifies a thin wrapper rather than rewriting collision detection or the game physics engine, which were poorly understood and largely uncommented legacy code.
- Minimal behavioral change: by capping the render rate without altering the core game logic/timesteps, the play experience stays faithful to the original while eliminating pathological CPU use.
- Fast payoff: the fix immediately reduced CPU usage and allowed developers to run builds while playing Pinball — a small productivity win, but a telling one.
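Chen’s actual change is not published, but a limiter in that spirit takes only a few lines. The sketch below caps rendering at 100 fps using standard C++ timing and reuses the hypothetical stubs from the earlier sketch; the historical fix would have used Win32‑era primitives instead:

```cpp
// Sketch of a 100 fps cap (the historical fix predates <chrono> and
// used Win32 timing; this only illustrates the idea).
#include <chrono>
#include <thread>

void UpdateSimulation();  // hypothetical, as in the earlier sketch
void RenderFrame();

void GameLoopCapped() {
    using clock = std::chrono::steady_clock;
    constexpr auto kFrameBudget = std::chrono::microseconds(10'000);  // 1/100 s

    auto nextFrame = clock::now();
    for (;;) {
        UpdateSimulation();
        RenderFrame();

        // Sleep away whatever remains of this frame's 10 ms budget so
        // the process idles instead of spinning.
        nextFrame += kFrameBudget;
        std::this_thread::sleep_until(nextFrame);
    }
}
```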
Deeper technical analysis: what a robust design would have done
The successful, modern approach to decoupling rendering and physics falls into a small number of standard patterns. For teams maintaining any interactive simulation, these practices would have prevented the Pinball pathology:
- Fixed‑timestep physics: update game physics at a fixed rate (for example, 30–120 updates per second) independent of render rate. This keeps simulation deterministic and avoids acceleration when the display refreshes faster. If rendering is more frequent than physics updates, interpolate visuals between physics snapshots (a sketch follows this list).
- Cap or sync rendering: render at V‑sync or a configured maximum (e.g., 60–144 fps) and employ OS or GPU sync primitives (SwapBuffers with V‑sync, present intervals) rather than tight busy waits.
- Use sleeps/yields or high‑resolution timers: when the loop finishes a frame early, yield the thread or sleep for the remaining time slice; on modern OSes use timeBeginPeriod/QueryPerformanceCounter or equivalent high‑resolution timers cautiously.
- Profiling and telemetry: include simple built‑in diagnostics to detect runaway frame rates or pegged CPU usage during QA and CI. That way, the misbehavior gets flagged during broader platform testing rather than appearing later in the wild.
- Defensive documentation: for legacy ports, document any timing or numerical assumptions carried over from the original platform. When porting cross‑architecture binaries, note any assembly handoffs and the expected pacing implicit in the wrapper. (devblogs.microsoft.com)
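Here is the promised sketch of the first bullet: a fixed‑timestep loop with render‑time interpolation, the standard pattern for keeping a simulation deterministic at any frame rate (function names are again hypothetical):

```cpp
// Fixed-timestep physics with interpolated rendering: the simulation
// advances in constant 1/120 s steps no matter how fast frames draw.
#include <chrono>

void StepPhysics(double dt)           { /* advance physics by exactly dt */ }
void RenderInterpolated(double alpha) { /* draw state blended by alpha */ }

void GameLoopFixedStep() {
    using clock = std::chrono::steady_clock;
    constexpr double kDt = 1.0 / 120.0;  // fixed physics step, in seconds

    auto previous = clock::now();
    double accumulator = 0.0;

    for (;;) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Consume real elapsed time in fixed-size physics steps; a fast
        // display cannot accelerate the game, only draw it more often.
        while (accumulator >= kDt) {
            StepPhysics(kDt);
            accumulator -= kDt;
        }

        // Render between the last two physics states using the leftover
        // fraction of a step (0 <= alpha < 1).
        RenderInterpolated(accumulator / kDt);
    }
}
```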
A practical audit checklist for similar legacy render loops:
- Audit any render loop for explicit pacing calls.
- Ensure the physics timestep is independent from render frequency.
- Add a configurable FPS cap for old or third‑party modules.
- Add telemetry to detect sustained CPU saturation (see the watchdog sketch after this list).
- Run tests on a diverse hardware matrix (fast and slow CPUs) during CI.
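For the telemetry item, a watchdog along these lines (threshold and window size are arbitrary illustrative choices) would flag an unpaced loop the first time a nightly build ran on fast hardware:

```cpp
// Sketch of frame-rate telemetry: count frames per one-second window
// and warn when the sustained rate exceeds a configurable ceiling.
#include <chrono>
#include <cstdio>

class FrameRateWatchdog {
public:
    explicit FrameRateWatchdog(int maxFps) : maxFps_(maxFps) {}

    // Call once per rendered frame.
    void OnFrame() {
        ++frames_;
        auto now = std::chrono::steady_clock::now();
        if (now - windowStart_ >= std::chrono::seconds(1)) {
            if (frames_ > maxFps_) {
                // Real products would feed a telemetry pipeline, not stderr.
                std::fprintf(stderr,
                             "warning: %d fps sustained; render loop may be unpaced\n",
                             frames_);
            }
            frames_ = 0;
            windowStart_ = now;
        }
    }

private:
    int maxFps_;
    int frames_ = 0;
    std::chrono::steady_clock::time_point windowStart_ =
        std::chrono::steady_clock::now();
};
```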
What this episode reveals about software culture, risk and legacy systems
Strengths on display
- Conservative porting preserved fidelity. Plummer’s choice to keep original gameplay logic intact while replacing platform glue was prudent: it preserved the experience and avoided accidental gameplay regression across architectures. That’s a hallmark of practical engineering when dealing with third‑party IP.
- Pragmatic triage works. Chen’s small, targeted fix delivered massive practical value and exemplified the power of small, low‑risk changes over wholesale rewrites — especially when the legacy code is poorly understood.
- Institutional learning. The story is well‑documented by the practitioners themselves, providing an evergreen lesson for engineers maintaining long‑lived systems.
Risks and caveats
- Hidden assumptions are dangerous. Code written against the implicit assumption “hardware will be slow enough” is brittle. That’s the central moral of the Pinball story: environmental drift (hardware getting faster, compilers changing floating‑point defaults, different runtime libraries) will surface previously non‑problematic behaviors.
- Anecdotes vs. measurements. Memory of “5,000 fps” is colorful and useful for scale, but it’s anecdotal. Precise performance depends on machine, build, and timing environment. Treat these numbers as illustrative rather than exact.
- Legal/organizational constraints hamper remediation. The game’s origins as licensed third‑party IP and later collision detection issues during 64‑bit porting constrained Microsoft’s options; at one point the cost of a deeper rewrite outweighed the value of keeping the title in shipping images. That sort of non‑technical friction is common and shapes the lifecycle of small legacy features. (en.wikipedia.org, theregister.com)
Broader takeaways for Windows‑era engineers and game developers
- Always design timing explicitly. Never rely on side effects of rendering costs to pace logic.
- Assume the environment will change. Tests that only run on era‑typical hardware will miss pathological behaviors on faster machines.
- Favor observability. Small telemetry around frame rates and CPU usage can detect an uncapped loop as soon as someone runs nightly builds on modern hardware.
- Apply minimal mitigations when cost‑sensitive. In large products, a small cap or wrapper often creates the best balance between correctness, cost and risk.
Nostalgia, curiosity, and the temptation to experiment
The human side of the story is irresistible: the idea of running that early Windows NT build on a modern multi‑core behemoth to see whether the frame rate truly “breaks” is an amusing thought experiment. It also highlights the practical value of preservation and archiving: these small artifacts embody decades of engineering choices, shifting constraints, and corporate memory.
But the anecdote should also temper experimentation with caution: running obsolete, unpatched binaries on networked modern machines or exposing them to untrusted inputs poses real security and stability risks. The right approach is to run such experiments in isolated, controlled environments or emulators designed to preserve old binaries safely.
Conclusion
The Pinball story is compact, delightful nostalgia with a durable engineering lesson: small, innocuous design choices — an absent sleep, an uncapped render loop — can quietly fossilize into big problems as the platform around them evolves. Dave Plummer’s port and Raymond Chen’s quick triage together show the lifecycle of pragmatic engineering: preserve what matters, instrument and test for assumptions, and when legacy surprises appear, prefer low‑risk, high‑impact fixes that restore system health without rewriting the entire past.
That lesson scales beyond a single arcade table shipped on Windows CD‑ROMs: it’s about defensive engineering, respect for assumptions, and the institutional memory that keeps software ecosystems maintainable across decades. (en.wikipedia.org, devblogs.microsoft.com)
Source: PC Gamer, “Former MS engineer Dave Plummer admits he accidentally coded Pinball to run 'at like, 5,000 frames per second' on Windows NT”