The software industry is in the middle of a reckoning: long-running growth in complexity, convenience-driven design choices, and economic incentives that reward feature churn have produced a landscape where many projects are bloated, fragile, and hostile to maintenance. A recent opinion roundup highlighted this trend, pointing fingers at cultural causes, popular frameworks, and specific technical catastrophes — and arguing that the first step toward reversing "software enshittification" is to name and understand the forces that cause it. What follows is a close, practical read of those arguments, the evidence behind them, and a road map for engineers and managers who want to build software that lasts.
Background
Software complexity has been debated for decades, but the current moment feels different. For the last 30 years, inexorable increases in hardware capacity and the illusion that adding people or components will speed delivery have allowed projects to expand unchecked. At the same time, the economics of product marketing and platform lock-in reward newness over durability: shipping more features, more integrations, and more abstractions typically increases surface area for bugs and maintenance burden.

A cluster of contemporary voices — from pragmatic manifestos like Grug to pointed technical critiques of file formats and supply-chain incidents — has coalesced into a coherent narrative: we need to intentionally choose simplicity, and sometimes we must choose smaller, hand-maintainable systems over "scale at all costs" solutions.
Why software keeps getting more complicated
The incentive structure: marketing beats elegance
Product managers and marketing teams are rewarded for visible features and shiny demos. Feature lists that can be tweeted or shown in a slide deck attract investment and attention. That pressure means engineering teams are incentivized to add functionality — and to ship it quickly — even when a simpler design would be more maintainable.
- Marketing rewards immediate differentiation.
- Teams compete on perceived capabilities, not long-term maintainability.
- The net result: complexity accumulates where it’s visible or sells.
The developer psychology: creativity and the “cleverness” trap
Engineering is a craft and a creative outlet. Building complex systems can feel like an artistic achievement: clever abstractions, layered architectures, and domain-specific frameworks are gratifying to build and a chance to demonstrate technical prowess. That satisfaction feeds a bias toward sophistication, even when it harms long-term clarity.
- Writing creative abstractions is intellectually rewarding.
- “Deep” technical work is mistaken for correctness or quality.
- The cultural reward system often prizes novelty over durability.
Legacy systems, technical debt, and team dynamics
Once complexity is introduced, it compounds. Legacy systems accumulate patches and workarounds; new teams inherit code they don't fully understand; and the only practical path is to add more layers rather than peel them back. Team composition and organizational incentives — including hiring that values resumes with "modern" buzzwords — further accelerate the shift.
- Technical debt is rarely repaid: it grows interest in future tickets.
- More people on a tangled project typically add coordination overhead rather than speed.
- Core design understandability is often sacrificed for short-term delivery.
The hardware and scaling illusion
For decades Moore’s Law and Dennard scaling allowed raw capacity increases to mask inefficiencies. As clock-speed scaling slowed and power-density limits arrived, the old tricks for making software faster by throwing hardware at it no longer deliver the same payoff. Likewise, Amdahl’s Law and classic systems thinking show that adding more cores or more engineers yields diminishing returns for many real-world workloads.
- The era of “free performance” is waning.
- Parallelism and more engineers are not universal remedies for bad design.
- That forces a re-examination of where complexity is taken on.
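Amdahl's Law makes the diminishing returns above concrete: if any fraction of a workload is serial, speedup from added parallelism is bounded no matter how many workers you throw at it. A minimal sketch of the formula (illustrative numbers, not tied to any specific system):

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Maximum speedup when only `parallel_fraction` of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Even with 95% of the work parallelizable, the payoff flattens quickly:
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
# prints:
# 2 1.9
# 8 5.9
# 64 15.4
# 1024 19.6
```

With a 5% serial fraction the speedup can never exceed 20x, which is why "add more cores" (or more engineers) stops being an answer for many workloads.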
Grug and the case for radical simplicity
What Grug is and why it matters
The Grug movement — a deliberately minimal, jokey manifesto that urges developers to prefer simple, observable, and debuggable design — has become a modern lightning rod in the debate about complexity. Written in a faux-caveman tone, Grug’s rules are deliberately pragmatic: prefer locality of behavior, minimize over-abstraction, test at useful levels, and prioritize debuggability.

Grug is not a technical paper; it’s a cultural call to arms. Its real power is rhetorical: by reframing software craftsmanship in blunt terms, it makes conservative engineering choices socially legible again. Grug’s popularity shows a hunger for norms that value human-scale understanding and maintenance.
The practical lessons Grug offers
- Keep behavior close to the thing that does the work (locality of behavior).
- Prefer integration-style tests that validate system behavior over brittle unit tests that lock in implementation details.
- Avoid premature and deep abstractions; iterate until the abstraction is mature.
- Resist managerial or process rituals that enforce process over results.
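The first two rules can be sketched together. In this hypothetical example (the `Order` class and coupon code are invented for illustration), the pricing rule lives on the thing it prices rather than in a strategy factory three modules away, and the test asserts on observable behavior rather than on which internal helpers were called:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    subtotal: float
    coupon: Optional[str] = None

    # Locality of behavior: the discount rule sits next to the data it
    # operates on, so a reader can see the whole behavior in one place.
    def total(self) -> float:
        if self.coupon == "WELCOME10":
            return round(self.subtotal * 0.90, 2)
        return self.subtotal

# Behavior-level checks: these survive refactoring of the internals,
# unlike tests that mock and assert on implementation details.
assert Order(100.0, "WELCOME10").total() == 90.0
assert Order(100.0).total() == 100.0
```

If the discount logic later needs to vary, the abstraction can be extracted once the duplication is real, rather than speculatively up front.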
A case study in "small target" risks: the XZ debacle
In 2016 a detailed technical critique argued that the XZ compression container format was poorly suited for long-term archiving due to fragile header design, optional integrity checks, and other subtle flaws. The critique was technical but unmistakably practical: complex container formats with optional features increase the probability of interoperability failures and undetected corruption.

Fast forward to a near-disaster: a supply-chain compromise injected a backdoor into particular builds of the xz utilities. The malicious code was narrow, triggered only under specific build conditions and environments, and targeted a small but critical library used widely across Linux distributions. The incident was contained — in part because distributions test new packages in non-production channels — but it remains a vivid example of how small, subtle attack vectors and brittle format design can compound into a systemic risk.
Lessons from the episode:
- Complexity in widely used building blocks is a systemic threat. The more esoteric features a component exposes, the more ways an attacker or accidental corruption can exploit or break it.
- Supply-chain hygiene matters. Even highly trusted open-source projects can be targeted; human factors in maintainership (single-person maintenance, rushed merges) increase risk.
- Small, composable libraries are both a benefit and a danger. They promote reuse but magnify the blast radius of a compromise.
Retro tech and the cognitive benefits of simplicity
There’s a cultural reappraisal of older systems precisely because their simplicity made them understandable. Hobbyists and professionals alike are rediscovering that 8-bit and 16-bit-era systems are comprehensible end-to-end: the CPU, memory map, interrupts, and peripheral behavior can be held in a single engineer’s head.

That isn’t nostalgia alone. There’s a real engineering lesson: designs that favor transparency and small, well-understood components lower cognitive load and reduce surprises. When a system can be fully inspected and reasoned about, maintenance is human-scale again.
- Retro systems teach discipline in interface design.
- Small systems encourage reproducible mental models, which accelerate onboarding and debugging.
- Simplicity breeds longevity.
The macro pressures: chips, geopolitics, and climate
Two engineering realities are pressing us to rethink the assumption that complexity is free.
- Hardware and economic limits. The long plateau in clock speeds and the end of easy transistor density gains mean efficiency and correctness must be engineered at the software level rather than hidden behind hardware advances. The industry’s historical answer — add more parallelism or throw more engineers at the problem — faces diminishing returns.
- Geopolitical and climate fragility. The global semiconductor supply chain is geographically concentrated. Political instability or conflict in a few regions can have outsized consequences for manufacturing and distribution. Meanwhile, climate impacts are becoming more material to systems planning: infrastructure-dependent services must be resilient to extreme weather and long-term shifts. Where hardware becomes expensive or scarce, software that is maintainable by small, distributed teams becomes a strategic advantage.
Strengths and virtues of the “toward simplicity” movement
- Human-centered maintainability. Favoring simplicity reduces cognitive load, shortens onboarding, and enables faster bug isolation.
- Security gains. Fewer moving parts reduce attack surface and make supply-chain audits tractable.
- Performance and cost efficiency. Lean designs often require fewer resources and are cheaper to host and operate over time.
- Resilience. Systems built for understandability tend to degrade more gracefully and are easier to migrate.
The trade-offs and risks of embracing simplicity
Simplicity is not a panacea. There are real trade-offs:
- Feature limits. Simplifying aggressively may sacrifice advanced capabilities users demand.
- Short-term productivity loss. Refactoring toward simpler APIs and removing abstraction layers can be costly in the short term.
- Organizational friction. Companies that sell complexity (consulting, proprietary features, lock-in) may resist simplification.
- Oversimplification risks. Reducing complexity poorly — omitting essential abstractions or removing necessary modularity — can produce brittle, duplicated systems that are impossible to evolve.
Practical checklist: how teams can act now
- Audit your critical dependencies. Identify widely used libraries or formats where subtle design choices have systemic impact. If a small component fails, what’s the blast radius?
- Enforce clear invariants and interfaces. Make module boundaries explicit and document expected behavior with examples and quick tests.
- Prefer local reasoning. Apply the “locality of behavior” rule: put the logic close to the data it operates on unless a cross-cutting abstraction clearly simplifies the code.
- Adopt graded testing strategies. Prioritize high-level integration tests that validate system behavior, plus a smaller set of unit tests that target stable APIs.
- Keep the toolchain simple where it counts. Don’t bake fragile or obscure build-time tricks into production paths. Minimize the use of optional, extensible bits that conflict with deterministic builds.
- Design for auditable upgrades. Make deploys reversible and provide fast, automatic rollbacks for dependency updates that touch many systems.
- Staff for continuity. Acknowledge and staff for maintainer burnout — single-point-of-failure maintainers are a liability in critical projects.
- Measure cognitive load. Collect qualitative feedback from new engineers about the time to understand subsystems; use that as a metric for refactoring priority.
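The "clear invariants and interfaces" item above can be as lightweight as a docstring that doubles as documentation and a quick test. A hypothetical sketch (the `parse_version` function and tag format are invented for illustration), using Python's standard doctest module:

```python
def parse_version(tag: str) -> tuple:
    """Parse a 'vMAJOR.MINOR.PATCH' release tag into integers.

    The example below is the documented contract and a runnable check:

    >>> parse_version("v1.4.2")
    (1, 4, 2)
    """
    # Invariant: tags must carry the leading 'v'; fail loudly otherwise.
    if not tag.startswith("v"):
        raise ValueError("expected leading 'v': %r" % tag)
    major, minor, patch = tag[1:].split(".")
    return int(major), int(minor), int(patch)

if __name__ == "__main__":
    # Running the module verifies the documented examples stay true.
    import doctest
    doctest.testmod()
```

The point is not the parser itself but the practice: the module boundary states what it accepts, shows an example, and the example is executable, so the documentation cannot silently rot.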
For product leaders: organizational moves that support longer-term quality
- Reward maintainability with the same KPIs used for features: uptime, mean time to repair, and reduced incident recurrence.
- Fund architectural sprints that reduce coupling and retire legacy complexity with measurable goals.
- Resist feature bloat by requiring explicit ROI or lifecycle-cost estimates for new integrations.
- Invest in documentation as a first-class deliverable — not an afterthought.
Where simplicity meets reality: pragmatic examples
- Replace sprawling front-end frameworks with minimal, focused tooling for pages that don’t need heavy client-side state.
- For high-stakes binary artifacts (archives, images), prefer historically proven formats and avoid formats that rely on optional features for integrity.
- For critical libraries, require reproducible builds and multiple independent maintainers with clear merge governance.
Conclusion
The debate about software complexity is not new, but the convergence of supply-chain attacks, hardware limitations, climate risk, and cultural incentives makes the issue urgent. The voices calling for Grug-like simplicity, for skepticism of sprawling abstractions, and for careful scrutiny of the tools at the foundation of our stacks are not nostalgic curmudgeons — they are practical realists.

There’s no single path back to maintainability. The pragmatic route is to combine humility about cleverness (the old programming adage that debugging is twice as hard as writing the code in the first place still rings true), discipline in design (Hoare’s warning about the two ways of constructing software — make it so simple that there are obviously no deficiencies, or so complicated that there are no obvious deficiencies — remains a useful lens), and operational practices that make systems auditable and survivable.
Software teams must learn to pick their battles: take on complexity consciously where it yields real user value, and ruthlessly avoid it everywhere else. The era of unchecked growth in software capability is ending; the coming decade will favor teams that can build less — but build it better, more secure, and easier to maintain. That is not a backward step. It is the only practical path to software that serves people, not organizational vanity.
Source: theregister.com, "Programmers: you have to watch your weight, too"