ADS Retirement and SCOM Deprecation Push SQL Tooling Toward VS Code and Azure Monitor

Microsoft’s latest lifecycle moves have quietly — and in some cases not so quietly — tightened the noose on on‑premises SQL tooling and monitoring, forcing many organizations to rethink long‑standing architectures and operational contracts. Two separate but complementary actions define the moment: Microsoft’s retirement of Azure Data Studio (ADS) and its deprecation of key SCOM Management Packs for SQL Server reporting and analytics workloads, each nudging (and in places compelling) customers toward Azure Monitor, Azure Arc, and the Visual Studio Code ecosystem. The net effect is a clear product‑strategy arc: consolidate developer tools into VS Code, and consolidate monitoring and telemetry into Azure — a cloud‑first push with practical and fiscal consequences for enterprises that deliberately maintain on‑prem infrastructure.

Background: what changed and where it matters

Microsoft has published two separate, authoritative notices that together reshape the tooling and telemetry story for SQL Server customers.
  • Azure Data Studio will be retired — Microsoft announced the retirement in early February 2025, and ADS will be supported only through February 28, 2026. Microsoft’s guidance is explicit: migrate daily SQL work to Visual Studio Code and the MSSQL extension.
  • System Center Operations Manager (SCOM) management packs for SQL Server Reporting Services (SSRS), Power BI Report Server (PBIRS) and SQL Server Analysis Services (SSAS) were formally deprecated in January 2026 and will reach end of support in January 2027. Microsoft’s public guidance recommends Azure Monitor combined with Azure Arc and Log Analytics as the replacement monitoring strategy for those workloads.
Both moves are part of broader product shifts: SQL Server 2025 changes reporting defaults (Power BI Report Server becomes the primary on‑prem offering) and the vendor is re‑architecting how telemetry and lifecycle support are delivered for hybrid and on‑prem SQL components. That context is critical when evaluating the technical and operational impact on existing estates.

Why this matters: practical impact on operations​

Short answer: the two announcements together reduce the number of fully on‑premises, vendor‑supported end‑to‑end workflows for SQL reporting, analytics and developer tooling. For many organizations — particularly regulated, sovereign, or heavily air‑gapped environments — the implications are operational, financial and compliance‑sensitive.

Monitoring and incident detection changes​

  • SCOM MPs deprecation removes Microsoft’s maintained SCOM logic for SSRS, PBIRS and SSAS after January 2027. Existing MPs may continue to run in supported SCOM versions, but Microsoft will not publish fixes, updates, or compatibility guarantees for future SQL or SCOM releases. That makes long‑term reliance on those MPs a risk.
  • Microsoft’s recommended path is Azure‑centric: enable Azure Arc on the host, install the Azure Monitor Agent (AMA), forward telemetry to a Log Analytics workspace, and use Azure Monitor Workbooks and alerts in place of classical SCOM rules (a query sketch follows this list). For organizations that intentionally keep telemetry off cloud services, this is a material shift.
  • “Mandatory” is a loaded term. Microsoft is not physically blocking on‑prem monitoring, but by removing active, vendor‑maintained SCOM MPs and by making Azure solutions the supported path for modern compatibility, the practical choices become constrained. Treat any claim that Azure is legally required as interpretive — the vendor’s policy steers customers to Azure but does not technically force every monitoring flow into the cloud. That nuance matters in procurement and compliance conversations.
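
To ground the Azure‑centric path, here is a minimal sketch of a SCOM‑style health check re‑expressed as a Log Analytics query. It assumes the azure-monitor-query and azure-identity Python packages and a workspace already ingesting Windows event logs via AMA; the workspace GUID and the KQL filter are placeholders, not a published migration recipe.

```python
# Minimal sketch: a SCOM-style health check expressed as a Log Analytics query.
# Assumes a workspace already receiving Windows event logs via AMA.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

# Roughly equivalent to a SCOM rule that alerts on Report Server errors.
QUERY = """
Event
| where EventLog == "Application" and Source startswith "Report Server"
| where EventLevelName == "Error"
| summarize Errors = count() by Computer, bin(TimeGenerated, 15m)
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-guid>",  # placeholder
    query=QUERY,
    timespan=timedelta(hours=24),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)  # one row per computer per 15-minute error bucket
```

The same KQL body can back an Azure Monitor log search alert rule, which is where most migrated SCOM rule logic typically ends up.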

Developer tooling and cross‑platform workflows​

  • Azure Data Studio has been a cross‑platform favourite for database developers and analysts because it combined lightweight query editing, notebooks, and many database utilities across Windows, macOS and Linux. Its formal retirement removes that dedicated, supported cross‑platform editor. Microsoft’s explicit recommendation is to move workloads to VS Code plus the MSSQL extension — a different user experience with tradeoffs around extension parity and some missing features at launch.
  • Several ADS features are handled differently post‑retirement: some administration tasks (SQL Server Agent job management, heavy profiling, deep SSMS admin functions) remain Windows‑centric in SQL Server Management Studio (SSMS), while the VS Code MSSQL extension will cover a growing, but not yet feature‑complete, set of developer workflows. Expect temporary feature gaps during the transition period.

Licensing, billing and cost profile​

  • Cloud‑native monitoring typically moves costs from capital (one‑time or predictable maintenance contracts) to operational consumption (ingestion charges, log‑retention tiers, alert executions). That can be cheaper at scale or more expensive for heavy telemetry workloads. Budgeting must shift accordingly; a rough cost model is sketched after this list. Windows Report and other outlets have raised cost and training concerns tied to this shift.
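
For budgeting, a back‑of‑the‑envelope model is usually enough to see whether consumption billing helps or hurts. This is a minimal sketch with purely illustrative volumes and prices; substitute your region’s current rates and any negotiated discounts.

```python
# Rough Log Analytics cost model for budgeting the SCOM -> Azure Monitor shift.
# All volumes and prices below are illustrative assumptions; check the Azure
# pricing page for your region and your negotiated rates.

GB_PER_DAY = 25.0              # assumed telemetry volume from SSRS/PBIRS/SSAS hosts
INGESTION_PER_GB = 2.30        # assumed pay-as-you-go $/GB for analytics logs
RETENTION_PER_GB_MONTH = 0.10  # assumed $/GB/month beyond the included retention
RETENTION_MONTHS = 9           # months of paid retention beyond the included period

monthly_ingestion = GB_PER_DAY * 30 * INGESTION_PER_GB
retained_gb = GB_PER_DAY * 30 * RETENTION_MONTHS
monthly_retention = retained_gb * RETENTION_PER_GB_MONTH

print(f"Ingestion: ${monthly_ingestion:,.0f}/month")
print(f"Retention: ${monthly_retention:,.0f}/month")
print(f"Total:     ${monthly_ingestion + monthly_retention:,.0f}/month")
```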

Strengths of Microsoft’s approach​

Microsoft’s design decision is not without benefits. The vendor’s rationale for consolidation has practical merits that enterprises should weigh.
  • Unified engineering focus. Consolidating the SQL development surface into VS Code concentrates feature investment and security updates in one modern editor with a large extension ecosystem and an active community. That can accelerate delivery of advanced features compared with ADS’s independent lifecycle.
  • Cloud‑centric telemetry. Azure Monitor + Azure Arc + Log Analytics is a scalable, centrally managed telemetry solution. It supports long‑term storage, advanced analytics, correlation across hybrid environments, and easier integration with cloud‑native SIEM and AI features. For organizations that already use Azure or intend to modernize, the approach reduces tooling diversity and can simplify observability architectures.
  • Modern feature stack for reporting. Consolidating on Power BI Report Server (PBIRS) as the primary on‑prem reporting engine for SQL Server 2025 brings RDL and PBIX support into a single platform and aligns on‑prem reporting more closely with Power BI’s cloud capabilities. That can ease migrations to cloud analytics over time.

Risks, gaps and hard realities​

There are clear and immediate risks that administrators must treat as active items in 2026 planning cycles.
  • Feature parity gaps. The MSSQL extension on VS Code does not instantly replicate every ADS feature, and it certainly does not replace some SSMS‑only admin functions. Expect operational workarounds and scripts during migration windows. Track critical missing capabilities and allocate time for either feature requests or alternative tooling.
  • Data sovereignty and compliance friction. For customers legally constrained to keep telemetry on‑prem, the Azure Monitor recommendation is a poor fit without hybrid mitigations. While Azure Arc enables hybrid registration without moving data into Microsoft’s cloud by default, many monitoring features assume cloud storage and processing. Legal teams and auditors must review whether Azure Arc/Log Analytics configurations meet policy requirements. If they do not, organizations face an uncomfortable choice: remain on older releases (with accumulating risk), pay for third‑party monitoring stacks, or re‑engineer to meet compliance demands.
  • Operational retraining and migration debt. Shifting monitoring paradigms across every reporting server and analysis cluster is non‑trivial. The migration path includes enabling Azure Arc, installing agents, re‑creating alert logic, and building new dashboards — a multi‑quarter program for many organizations. The upfront labor and potential surprise costs (data ingestion, retention) must be modeled now.
  • Vendor risk of single‑stack dependency. Consolidating into Azure and VS Code increases dependency on a single vendor’s stack. That dependency can improve integration but concentrates risk if product direction shifts again. Organizations must evaluate vendor lock‑in against the operational benefits. This is a governance decision as much as a technical one.

How to plan a pragmatic migration — concrete steps​

Below is a prioritized, practical plan for IT teams facing these changes. Treat the list as a program checklist rather than a prescriptive project plan.
  • Inventory (30–60 days)
  • Capture every service that depends on ADS, SCOM MPs, SSRS, PBIRS, and SSAS.
  • Tag services by criticality, compliance impact, and owner.
  • Identify any legacy automation that uses SCOM MPs or ADS‑specific extensions.
  • Short‑term stabilisation (60–120 days)
  • For ADS users: set up VS Code with the MSSQL extension, import Database Projects and notebooks, and validate critical developer workflows. Maintain ADS in parallel until Feb 28, 2026 for contingency.
  • For SCOM MP consumers: document the MP rules you rely on and map each rule to an equivalent telemetry signal (PerfCounter, Event, custom log) that can be ingested into Azure Monitor (see the mapping sketch after this plan).
  • Proof of Concept (90–180 days)
  • Build an Azure Arc POC for a set of non‑critical SSRS/PBIRS/SSAS servers and ingest telemetry to a Log Analytics workspace.
  • Recreate a small but representative set of alerts and dashboards using Azure Monitor Workbooks.
  • Measure costs for ingestion and retention; tune Data Collection Rules to balance visibility and cost.
  • Migration runway (6–18 months)
  • Prioritize migration of production instances by business impact and regulatory constraints.
  • For sovereign or isolated environments, evaluate on‑premises third‑party monitoring (Prometheus + Grafana + probes, Splunk on‑prem, or private SIEMs) if Azure Arc cannot meet policy constraints.
  • For developer teams, roll out standardized VS Code extensions, recommended settings, and training materials to minimize friction.
  • Long‑term governance and optimization (ongoing)
  • Implement telemetry cost governance: tag log sources, set retention tiers, and automate DCR lifecycle.
  • Maintain an “escape hatch” for critical monitors — a documented fallback that works without cloud ingestion if needed.
  • Reassess tooling annually; vendor roadmaps change and the migration effort should be opportunistic alongside feature improvements.
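
One useful artifact from the stabilisation and POC phases is a rule‑mapping inventory: every SCOM MP rule you depend on, paired with the telemetry signal and KQL that will replace it. The sketch below is hypothetical; the rule names, counter objects and query bodies are illustrative placeholders, not Microsoft‑published mappings.

```python
# Hypothetical SCOM-rule -> Azure Monitor mapping inventory.
# Rule names, signal types and KQL bodies are illustrative placeholders.
SCOM_RULE_MAP = {
    "SSRS Service Errors": {
        "signal": "Event log (Application)",
        "kql": 'Event | where Source startswith "Report Server" '
               '| where EventLevelName == "Error"',
        "severity": 1,
    },
    "SSAS Memory Pressure": {
        "signal": "Performance counter",
        "kql": 'Perf | where ObjectName == "MSAS16:Memory" '   # version-specific object name
               '| where CounterName == "Memory Usage KB" '
               '| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 5m)',
        "severity": 2,
    },
}

# Emit a migration worksheet, one entry per rule, for the runbook owners.
for rule, spec in SCOM_RULE_MAP.items():
    print(f"[Sev {spec['severity']}] {rule} <- {spec['signal']}")
    print(f"    {spec['kql']}")
```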

Migration checklist: minimal viable monitoring (MVM)​

  • Enable Azure Arc on one or more test servers.
  • Install the Azure Monitor Agent (AMA) and verify data arrives in Log Analytics.
  • Create Data Collection Rules (DCRs) for key counters (service health, CPU, disk, event logs).
  • Rebuild at least one critical alert with an equivalent severity and escalation path.
  • Validate dashboarding and runbook automation for incident remediation.
This “MVM” ensures you can maintain situational awareness while you plan the full migration; a short script for verifying step two follows.
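
Step two of the MVM (verify data arrives in Log Analytics) can be scripted. A minimal sketch, assuming the azure-monitor-query and azure-identity packages and a placeholder workspace GUID:

```python
# Minimal sketch: confirm AMA heartbeats are reaching Log Analytics (MVM step 2).
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-guid>",  # placeholder
    query="Heartbeat | summarize LastSeen = max(TimeGenerated) by Computer",
    timespan=timedelta(minutes=30),
)
for table in response.tables:
    for computer, last_seen in table.rows:
        print(f"{computer}: last heartbeat at {last_seen}")
```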

Vendor messaging vs. practical choice: analysis​

Microsoft’s public messaging frames these changes as modernization and consolidation: fewer, better‑supported tools; improved innovation velocity; better integration with cloud AI and telemetry services. Technically, those are defensible reasons. VS Code has a massive ecosystem, and Azure Monitor + Arc offers powerful hybrid telemetry capabilities. However, the timing and cumulative effect of multiple retirements create real operational friction. A cluster of retirements across 2025–2027 (Azure Data Studio retirement; SCOM MPs end of support; Office Online Server and other on‑prem web components retiring) compresses migration timelines and increases coordination costs across security, compliance, and application teams. The realistic outcome for many organizations will be a mix of:
  • Quick adoption where cloud is allowed and cost‑effective.
  • Extended co‑existence where legacy tools remain in place until application compatibility and compliance work is completed.
  • Third‑party or homegrown monitoring bridging the gaps in sensitive environments.
Callouts:
  • Any assertion that Microsoft “is forcing” Azure for all customers is an oversimplification; the vendor is deprecating certain on‑prem support paths and recommending Azure alternatives — a strong nudge, but not an absolute technical lockout. Treat “force” as shorthand for “removal of vendor‑maintained on‑prem alternatives that materially increases the cost and risk of staying completely off‑cloud.”

Recommendations for CIOs and platform owners​

  • Treat 2026 as a deadline year for decisions, not just a planning horizon. ADS support ends Feb 28, 2026 and SCOM MPs for SSRS/PBIRS/SSAS are unsupported after Jan 2027; both dates should be explicit gates in migration programs.
  • Run a cross‑functional risk review (security, legal, procurement, operations) to quantify the compliance and cost delta between on‑prem alternatives and Azure Arc/Monitor.
  • Negotiate with vendors and partners early: managed service providers and SI vendors may offer hybrid observability platforms or fixed‑price ingestion options that reduce the shock of consumption billing.
  • Invest in runbook automation: the migration is a people problem as much as a tooling problem. Automated remediation reduces incident overhead during the transition.

Conclusion: a modernization fork — plan, don’t panic​

Microsoft’s twin moves — retiring Azure Data Studio and deprecating SCOM management packs for SSRS/PBIRS/SSAS — are consistent with a strategy to centralize development and monitoring investments where Microsoft sees greatest long‑term value: VS Code for developers and Azure for telemetry. For organizations already aligned to Azure, the changes will accelerate modernization and simplify some long‑term operational choices. For organizations where cloud is constrained by policy or budget, the path is messier: more work, more risk, and likely a hybrid mix of on‑prem and cloud solutions for some time.
The practical advice is straightforward: inventory, POC, quantify costs, and build contingency plans. The vendor has provided migration guidance and recommended alternatives; however, the timing, scale and cumulative nature of these retirements mean that thoughtful planning and cross‑team coordination will determine whether your organization reaps the benefits of modernization or pays an avoidable tax in time, money and operational risk.
Source: Windows Report https://windowsreport.com/microsoft...ing-as-sql-server-tools-reach-end-of-support/
 

John Donovan’s December 2025 experiment — feeding decades of adversarial material about Royal Dutch Shell into multiple public AI assistants and publishing the divergent outputs — transformed a long‑running supplier feud and documentary archive into a live test of how generative systems handle contested archives, and in doing so exposed a set of practical governance failures that lawyers, platform designers, corporate boards and journalists must now confront.

Background

From a supplier dispute to an adversarial archive​

The Donovan–Shell story begins in commerce: a 1990s dispute between Don Marketing (the Donovan family business) and Shell over promotional work evolved into litigation, domain fights and a decades‑long online campaign by John and his relatives. Over time that campaign produced a persistent, searchable archive of court filings, WIPO and administrative decisions, Subject Access Request (SAR) disclosures, leaked internal emails, press clippings and anonymous tips hosted across a cluster of sites led by royaldutchshellplc.com. The archive is complex: it contains verifiable documents alongside redacted, anonymous and hard‑to‑trace materials.
The domain dispute is a public, formal anchor in that history: a World Intellectual Property Organization (WIPO) UDRP panel considered the royaldutchshellplc.com claims in Case No. D2005‑0538, a decision that is part of the public administrative record.

Archive as a public resource and a provocation​

Donovan’s sites have repeatedly been used as leads by mainstream outlets. In 2009, leaked internal Shell emails published on royaldutchshellplc.com were referenced in syndicated Reuters coverage about internal cost‑cutting and safety concerns, demonstrating that a small, persistent archive can seed major reporting cycles. At the same time, the archive mixes Tier A materials (court filings, WIPO decisions) with Tier C items (anonymous tips, redacted memos) that demand caution.
Those two facts — public utility and variable provenance — are the starting point for the December 2025 experiment that reframed the Donovan archive as a new kind of reputational risk.

What happened in December 2025: the AI experiment explained​

Two posts and one deliberate test​

On December 26, 2025 John Donovan published two complementary pieces that were intentionally performative: “Shell vs. The Bots: When Corporate Silence Meets AI Mayhem” and a satirical roleplay titled “ShellBot Briefing 404.” Both posts explicitly describe feeding identical prompts and curated archive material into several public AI assistants (identified by Donovan as Grok/xAI, ChatGPT/OpenAI, Microsoft Copilot and Google AI Mode) and publishing the side‑by‑side outputs to highlight divergence.
The experiment had three tactical goals:
  • Turn archival persistence into machine‑readable fuel for retrieval‑augmented generation (RAG) systems.
  • Force cross‑model comparisons that make hallucinations and model disagreement visible to readers.
  • Convert a niche adversarial archive into an active reputational threat by leveraging algorithmic amplification.
Each goal was met, in part, because the archive is both large and well‑organised — precisely the qualities that make it attractive to retrieval pipelines — and because the experiment packaged the model outputs as newsworthy artifacts rather than private tests.

The headline incident: a model hallucination and a correction​

The most concrete point of friction was a single, emotionally charged hallucination. In Donovan’s published comparison, one assistant (reported publicly as Grok) generated a confident biographical claim that Alfred Donovan — John’s father — had died “from the stresses of the feud.” This claim contradicted obituary records and Donovan’s own account that Alfred died in July 2013; another assistant (ChatGPT, per Donovan’s transcripts) corrected the claim and cited documented sources. The contrast — one model inventing a dramatic causal link, another debunking it — became a vivid demonstration of how models optimise for coherent narrative rather than rigorous provenance.

Why this matters: three interlocking risks exposed​

1) Hallucination becomes reputational harm​

Generative models are trained and tuned to produce fluent, persuasive prose. When they are given partial, emotionally resonant material, they are likely to fill gaps with plausible‑sounding but unverified details. The Donovan episode shows how a single hallucination about a sensitive personal fact can be amplified into a circulating claim that is hard to fully retract once it reaches other platforms, aggregators or human readers who treat AI text as authoritative.

2) Feedback loops and authority laundering​

A generator’s output often re‑enters the public web (through social posts, articles, or cached pages) and is re‑ingested by other models and services. That creates a feedback loop where an invented line can be treated as input evidence by later systems — a facts‑by‑iteration problem. Donovan’s public side‑by‑side transcripts turn the entire debate into feedstock for other assistants and human curators, making it easier for a hallucination to morph into de facto “truth” in downstream contexts.

3) Corporate silence is no longer neutral​

Historically, corporations often adopt a posture of legal restraint or strategic silence toward adversarial critics: litigate when necessary, avoid amplifying the critic with heavy legal action, and let the story fade. The AI era complicates that calculus. When an activist intentionally seeds assistants with an archive, silence leaves a provenance vacuum that models and third parties will readily fill. Donovan framed this directly: Shell might ignore a website, but it cannot ignore the machine‑orchestrated narratives that synthesise archival material into viral form. That observation reframes silence from a defensive tactic into a potential risk amplifier.

Disentangling what’s provable from what’s plausible​

The Donovan archive contains material of varying evidentiary weight. For responsible reporting and governance, the public record can be triaged into three useful categories:
  • Tier A — Verifiable anchors: court filings, WIPO decisions, regulator records and contemporaneous press reports. These should be treated as high‑confidence evidence when independently corroborated. The WIPO UDRP decision in Case No. D2005‑0538 is a clear example of a Tier A public document.
  • Tier B — Documentary but contested items: correspondence, internal emails and SAR disclosures that exist but may be subject to interpretive dispute. These are useful for context but demand careful citation and full contextualisation. Reuters’ 2009 reporting based on leaked emails that Donovan posted is an example where Tier B materials seeded mainstream coverage.
  • Tier C — Pattern and attribution claims: operational espionage, named covert actions and allegations drawn primarily from anonymous tips or redacted memos whose chain‑of‑custody cannot be independently reconstructed. The archive contains multiple Tier C items — especially claims about private intelligence activities directed at activists — that remain plausible but not fully proven in public records. These items should be explicitly labelled as allegations.
Flagging unverifiable claims is essential because generative models inherently collapse nuance unless provenance metadata is explicit; the sketch below shows one way to encode that three‑tier triage.
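
As an illustration, the tier scheme can be made machine‑readable so downstream tools inherit the labels. This is a minimal Python sketch under stated assumptions: the item fields and the kind‑based heuristics are illustrative, not Donovan’s or any newsroom’s actual categorisation logic.

```python
# A sketch of the three-tier triage described above, applied to archive items.
# Item fields and keyword heuristics are illustrative assumptions.
def triage(item: dict) -> str:
    """Assign an evidentiary tier to an archive item.

    item = {"kind": str, "chain_of_custody": bool, "anonymous": bool}
    """
    verifiable_kinds = {"court_filing", "wipo_decision", "regulator_record", "press_report"}
    if item["kind"] in verifiable_kinds:
        return "A"  # verifiable public anchor
    if item["chain_of_custody"] and not item["anonymous"]:
        return "B"  # documentary but contested
    return "C"      # allegation: label explicitly, never assert as fact


print(triage({"kind": "wipo_decision", "chain_of_custody": True, "anonymous": False}))  # A
print(triage({"kind": "leaked_email", "chain_of_custody": True, "anonymous": False}))   # B
print(triage({"kind": "anonymous_tip", "chain_of_custody": False, "anonymous": True}))  # C
```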

Cross‑checking key claims: independent corroboration​

  • WIPO: The administrative panel decision in Case No. D2005‑0538 is publicly available in the WIPO database and documents the domain dispute involving royaldutchshellplc.com. That decision is a primary anchor in the procedural history.
  • Mainstream reporting: Donovan’s site was cited in syndicated Reuters stories in 2009 that discussed leaked internal Shell emails and internal cost‑cutting signals; those items show the archive’s capacity to generate legitimate news leads. Reuters‑linked coverage referencing royaldutchshellplc.com is recorded in Donovan’s news‑collation pages and in contemporaneous media archives.
  • Private intelligence reporting: Historical allegations about Hakluyt and the use of informers in surveillance operations have been reported in national press outlets (for example, coverage of an operative codenamed “Camus” / Manfred Schlickenrieder in 2001), corroborating the pattern of private intelligence engagement with energy firms even where specific acts remain contested. Independent reporting from continental outlets documented those episodes decades earlier.
Where Donovan’s postings claim operational details about surveillance or burglaries targeted at him personally, public records and independent press reporting do not uniformly reproduce every specific allegation; those claims remain in need of further forensic or judicial corroboration and should be reported as contested.

Structural failures revealed (and where fixes are needed)​

For AI vendors and platform operators​

  • Provenance by default: Retrieval‑augmented pipelines should attach source metadata for every asserted fact — including document identifiers, timestamps, and confidence markers — and make that metadata visible to users. Donovan’s experiment illustrates how opaque retrieval turns contested archives into authoritative‑sounding prose.
  • Hedging defaults for living persons: When a model summarizes materials about living persons or sensitive incidents lacking Tier A anchors, the default should be conservative language with explicit disclaimers. The accidental invention of a cause‑of‑death claim demonstrates why hedging must be productised.
  • Audit logs and exportable contexts: Platforms should let users (and regulators) export the exact prompt, model version, retrieval context and timestamps used for a particular output to enable reproducibility and redress. Donovan’s public transcripts would be more audit‑useful if retrieval logs and confidence scores were included; a sketch of a provenance‑annotated claim follows below.
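
To make “provenance by default” concrete, a provenance‑annotated claim could be modelled roughly as follows. The schema and field names are a hypothetical sketch, not any vendor’s published API.

```python
# Hypothetical schema for a provenance-annotated RAG assertion.
# Field names are illustrative assumptions, not any vendor's actual API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SourceRef:
    doc_id: str        # e.g. a WIPO case number or court docket ID
    url: str           # where the document can be inspected
    retrieved_at: str  # ISO 8601 timestamp of retrieval
    tier: str          # "A" (verifiable), "B" (contested), "C" (allegation)


@dataclass
class AssertedClaim:
    text: str                  # the sentence the model emitted
    confidence: float          # model-reported confidence, 0..1
    sources: List[SourceRef] = field(default_factory=list)

    def render(self) -> str:
        # Hedging default: any claim without a Tier A source is labelled.
        if not any(s.tier == "A" for s in self.sources):
            return f"[UNVERIFIED ALLEGATION] {self.text}"
        refs = "; ".join(f"{s.doc_id} ({s.url})" for s in self.sources)
        return f"{self.text} [sources: {refs}]"
```

The render method encodes the hedging default discussed above: a claim with no Tier A anchor is explicitly flagged before it leaves the system.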

For corporate communications, legal teams and boards

  • AI triage and rapid rebuttal: Corporations need a 72‑hour AI triage stream to log and assess viral model outputs that involve the company or named individuals, assign owners for verification, and publish concise documentary rebuttals where Tier A evidence exists. Silence remains a tactical choice, but it must be weighed against the speed of AI‑driven amplification.
  • Transparency on private intelligence: Where companies retain third‑party intelligence vendors, boards should require documented legal, ethical and reputational sign‑offs and consider public disclosure of oversight frameworks. The documented pattern of private intelligence engagements in the energy sector makes these practices a foreseeable source of reputational blowback.

For journalists and researchers​

  • Treat model outputs as leads, not facts: Every model claim that could materially harm a reputation or alter public understanding must be re‑verified against primary documents. Preserve prompts, retrieval contexts and outputs as part of the editorial audit trail.
  • Explicit labelling and context: When summarising contested archives, present the documentary anchors and the limits of provenance alongside any AI outputs to avoid substituting model disagreement for sourcing. Donovan’s side‑by‑side transcripts are instructive but insufficient without primary‑source anchoring.

Practical playbook: immediate steps for each stakeholder​

  • For AI vendors:
  • Ship provenance metadata with every factual claim in RAG outputs.
  • Default to hedged language for biographical or legal assertions absent Tier A anchors.
  • Offer exportables for audit and redress.
  • For corporate counsel/communications:
  • Stand up a rapid‑response AI triage channel and assign a verifiable owner for claims involving living persons.
  • Publicly publish Tier A rebuttal packages (redacted where necessary) tied to specific modelled claims.
  • Reassess policies for private intelligence vendor retention and oversight.
  • For journalists/researchers:
  • Use adversarial archives as lead generators; always seek Tier A corroboration before amplification.
  • Archive and publish retrieval metadata and prompts used when AI tools contribute to reporting.
  • Label unverifiable items as allegations and preserve editorial disclaimers.

Strengths and ethical benefits of Donovan’s approach (even where it is provocative)​

Donovan’s method — converting a sprawling archive into a readable dataset and staging cross‑model comparisons — is, in itself, a form of public pedagogy. It makes model failure modes visible to ordinary readers and forces platforms to reckon with practical design choices. The experiment demonstrates three positive functions:
  • Transparency pressure: it compels corporate actors and platforms to articulate provenance and verification standards.
  • Diagnostic value: cross‑model disagreement highlights contrasting design trade‑offs (narrative fluency vs. source grounding).
  • Democratisation of scrutiny: small actors can use low‑cost tools to surface documents that otherwise would be buried in dockets or leaked caches.
Those benefits do not negate legal or ethical responsibilities: activists publishing contested material must be explicit about provenance and preserve audit trails so that downstream users can verify, challenge or correct the record.

Legal exposure and defamation risk
Publishing internal emails, SAR outputs and court filings is often lawful when the materials are genuine, but republication still carries defamation and data‑protection risks if assertions go beyond what the documentary evidence supports. Donovan’s archive has previously triggered legal skirmishes (domain disputes at WIPO, defamation threats and administrative proceedings), underscoring the legal tightrope small publishers walk when combining named documents with anonymous tips. The prudent posture for newsrooms and platforms is to apply a higher verification bar before republishing incendiary claims from Tier C materials.

The path forward: governance, design and the human judgment that machines cannot replace​

The Donovan–Shell bot war is not a technical curiosity — it is an operational warning that:
  • Machines amplify and organise, but they do not adjudicate provenance.
  • Corporate silence has consequences in the age of generative assistants.
  • Editorial and product safeguards (provenance metadata, hedging defaults, audit exports) are implementable and necessary.
Fixing these problems will not be a single vendor update; it will require coordinated changes across newsroom practice, platform design and corporate governance. The most important immediate change is cultural: insist that every AI‑assisted public claim be traceable to a Tier A anchor or flagged as an unverified allegation. That simple rule restores human judgment as the final arbiter between machine fluency and public fact.

Conclusion​

John Donovan’s December 2025 experiment demonstrates that archival persistence plus generative AI equals a new vector for reputation‑shaping narratives: one that is fast, reproducible and perilously indifferent to provenance. The technical fix is straightforward — provenance metadata, hedged outputs and exportable audit trails — but implementing those fixes requires institutional will across AI vendors, publishers and corporate boards.
The Donovan archive will remain a live case study: a hybrid beast made of verifiable public records and contested, anonymous claims. Its newest trick — turning archival weight into machine‑readable authority — has sharpened the policy conversation in a way that court cases and domain disputes alone never did. The remedy is not silence or suppression; it is transparency, verification and a governance architecture that re‑centres human judgment at every stage a model touches the public square.

Source: Royal Dutch Shell Plc .com More Than Dynamite: How AI Reframes the Donovan–Shell Archive as Persistent Risk
 
