RSM AI-Driven Adverse-Event Platform in Hawaiʻi Uses Microsoft Azure for Real-Time Health Oversight

RSM’s deployment of an AI-driven adverse-event reporting platform in Hawaiʻi marks a clear inflection point in how state health agencies can use cloud-native Microsoft technologies to detect risks, flag unreported incidents and accelerate interventions for vulnerable populations. Early public statements from RSM describe a solution built on Microsoft Azure that integrates Azure SQL, Azure AI Foundry (Microsoft Foundry), Power BI and other Data & AI services to analyze Medicaid claims and case-management data in near real time — and the University of Hawaiʻi at Mānoa provided analytics support for model development and exploratory analysis. This article summarizes the announcement, validates the technical assertions where possible, and provides a critical analysis of the solution’s strengths, operational considerations and governance risks. The reporting synthesizes RSM’s release and additional documentation from Microsoft and the University of Hawaiʻi to cross‑check claims and expose areas that require scrutiny or further independent validation.

[Figure: Cloud-based Microsoft Foundry diagram showing risk scores with Azure SQL, HIPAA, claims and disaster-recovery icons]

Background / Overview​

RSM’s press release states the first-phase deployment focused on the Hawaiʻi Department of Health’s Developmental Disabilities Division (DDD), which manages services for roughly 3,600 active participants. The platform is described as ingesting claims and case-management information, applying AI to detect patterns associated with adverse events (including under‑reported incidents), and surfacing risk-scored participants to case managers through interactive dashboards. RSM reported an initial performance figure of 98.9% accuracy in detecting risk patterns.

University of Hawaiʻi internal communications and news coverage corroborate the partnership and confirm that the Social Science Research Institute’s Office of Evaluation and Analytics for Intellectual/Developmental Disabilities contributed analytic expertise and dashboard development — though their public reporting places greater emphasis on the dashboard tooling and flagging of potential unreported adverse events rather than publishing independent model accuracy metrics.

Microsoft’s product documentation confirms the primary platform components RSM cites. Azure includes database services (Azure SQL), interactive analytics and visualization options (Power BI and Microsoft Fabric), and a unifying AI platform formerly known as Azure AI Foundry (now described in Microsoft documentation as Microsoft Foundry), which provides model management, agents and tool integrations for enterprise AI applications. Microsoft also documents HIPAA support and BAA arrangements for many Azure and Fabric services — a necessary baseline for any solution handling protected health information (PHI).

What RSM built: technical summary​

Architecture at a glance​

  • Data ingestion: Claims and case-management systems feed cleansed, normalized datasets into a secure Azure environment (RSM cites near‑real‑time ingestion).
  • Storage and processing: Data is stored in Azure SQL and tied into Azure analytics and AI stacks for feature engineering, model scoring and aggregation.
  • AI/model layer: Models are orchestrated via Microsoft Foundry tooling (Azure AI Foundry lineage) to run detection and risk‑scoring pipelines.
  • Visualization & workflow: Power BI dashboards present risk scores, overdue review trackers, and case manager flags so staff can triage and act.
  • Security and compliance: The deployment leverages Azure’s HIPAA-capable services and disaster-recovery planning; RSM notes the environment was designed to meet statewide AI readiness efforts.
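As a rough illustration only — RSM has not published implementation details — the risk-scoring and triage step in the architecture above might take the following shape. The feature names, weights and threshold here are invented placeholders standing in for a trained model:

```python
from dataclasses import dataclass

@dataclass
class ParticipantRecord:
    # Hypothetical features derived from claims/case-management data
    participant_id: str
    er_visits_90d: int        # emergency-room claims in the last 90 days
    missed_reviews: int       # overdue case-management reviews
    incident_reports_90d: int # adverse-event reports filed recently

def risk_score(rec: ParticipantRecord) -> float:
    """Toy hand-weighted score; a real system would use a trained model.
    Each feature is capped and normalized to [0, 1], then weighted."""
    score = (0.5 * min(rec.er_visits_90d, 4) / 4
             + 0.3 * min(rec.missed_reviews, 3) / 3
             + 0.2 * min(rec.incident_reports_90d, 2) / 2)
    return round(score, 3)

def triage(records, threshold=0.6):
    """Rank participants by score, keeping only those at/above threshold
    so case managers see a short, prioritized worklist."""
    scored = [(r.participant_id, risk_score(r)) for r in records]
    flagged = [s for s in scored if s[1] >= threshold]
    return sorted(flagged, key=lambda s: s[1], reverse=True)

records = [
    ParticipantRecord("P-001", er_visits_90d=3, missed_reviews=2, incident_reports_90d=0),
    ParticipantRecord("P-002", er_visits_90d=0, missed_reviews=0, incident_reports_90d=0),
    ParticipantRecord("P-003", er_visits_90d=4, missed_reviews=3, incident_reports_90d=2),
]
flags = triage(records)
print(flags)
```

In a dashboard-driven workflow like the one described, the output of such a scoring step would feed a Power BI worklist rather than trigger automatic action, keeping a human in the loop.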

Claimed outcomes and benefits​

RSM highlights the following early impacts:
  • Early detection of high‑risk and under‑reported adverse events
  • Improved case management and care coordination
  • Enhanced service quality and participant outcomes
  • Cost savings through proactive interventions
  • A scalable framework for replication statewide and across jurisdictions
The firm asserts 98.9% accuracy in detecting risk patterns, which it positions as evidence of the platform’s effectiveness. This figure is central to the narrative of measurable public-health impact but requires independent validation.

Independent verification and cross-references​

Cross-referencing RSM’s announcement with independent sources is critical to separate program claims from verifiable facts.
  • RSM’s press release is widely syndicated through PR Newswire, MarketScreener and other press aggregation outlets; the content is consistent across outlets. This confirms RSM publicly made the claims described.
  • The University of Hawaiʻi published a local summary of the partnership that confirms UH Mānoa researchers supported development of dashboards and predictive flagging used by the Department of Health’s DDD program. UH’s coverage focuses on the operational dashboard and how it highlights potential unreported events for 3,600 participants. The University page does not independently confirm the 98.9% accuracy metric.
  • Microsoft documentation verifies the availability and purpose of the platform components cited by RSM: Microsoft Foundry (formerly Azure AI Foundry) provides enterprise AI orchestration and tooling; Power BI / Fabric are part of Microsoft’s analytics/compliance stack; and Azure supports HIPAA-related business associate agreements (BAAs) and other compliance frameworks that healthcare organizations rely on.
Taken together, these sources confirm (a) the partnership exists, (b) the platform is built on Microsoft Azure services that are commonly used for HIPAA-capable deployments, and (c) UH Mānoa contributed analytics and dashboard development. What remains less verifiable in the public record is the internal benchmark and methodology behind the 98.9% accuracy claim — the press release states the number, but neither RSM nor UH Mānoa has published the methodology, dataset description, validation split, false positive/negative rates, or peer-reviewed evaluation that would allow independent assessment.

Why this matters: strengths and opportunities​

1. Real‑time, data‑driven supervision for high‑risk populations​

People with intellectual and developmental disabilities often depend on coordinated services that can be fragmented across providers and payers. Automated pattern detection that surfaces probable adverse events or at‑risk individuals can reduce latency between incident and intervention, improving safety and outcomes.
  • Benefits include reduced time to detection, better triage of limited case management resources, and the possibility of preventive outreach rather than reactive remediation. RSM’s dashboards appear designed to operationalize those benefits by placing actionable insights directly in case managers’ workflows.

2. Enterprise-grade platform components​

Building on Azure, Power BI and Microsoft Foundry gives the project access to mature tools for data storage, governance, model lifecycle and visualization. Microsoft’s cloud ecosystem supports BAAs, compliance audits and widely adopted security controls — all of which reduce the institutional friction of deploying PHI workloads in the cloud when configured correctly.

3. Cross‑sector collaboration model​

The involvement of state government, a consulting firm (RSM) and university researchers creates a model where domain expertise, technical engineering and operational oversight can coalesce. When executed well, this combination accelerates iterative improvement, helps train local analytics capacity, and creates educational opportunities for students working on real public-health problems.

4. Potential for scalable replication​

If the platform’s pipelines rely on standard claims and case‑management schemas, the architecture could be adapted to other states or program domains. A modular approach using Foundry templates and Power BI assets would make reuse and scaling more straightforward for large‑scale public health modernization projects.

Risks, caveats and areas that need independent validation​

The promise of faster detection must be weighed against a set of technical, ethical and operational risks:

1. Unverified accuracy claim and the risk of false confidence​

The cited 98.9% accuracy figure is compelling but opaque. Critical questions that must be answered before scaling include:
  • What is the operational definition of “accuracy” for this application (per‑event, per‑participant, recall, precision)?
  • What data were used for training and validation, and were they held out from the training process?
  • How does the model perform across demographic subgroups and care settings (does accuracy drop for certain populations)?
  • What are false positive and false negative rates, and what are the expected human workload impacts of each?
Without transparent validation and third‑party review, the number risks being misinterpreted by policymakers, who may assume near‑perfect detection when edge cases and distributional shifts inevitably occur. The University of Hawaiʻi reporting does not publish those model evaluation details.
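These questions matter because accuracy alone can mask poor precision under class imbalance, and adverse events are rare by definition. The numbers below are invented purely for illustration, not drawn from the deployment: with 3,600 participants and a hypothetical 40 true adverse events, a model can report 98.9% accuracy while half of its flags are false alarms:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Standard classification metrics from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

# Hypothetical scenario: 3,600 participants, 40 true adverse events.
# The model catches 30 of them but also flags 30 participants incorrectly.
m = confusion_metrics(tp=30, fp=30, fn=10, tn=3530)
print(f"accuracy={m['accuracy']:.3f}  "
      f"precision={m['precision']:.3f}  recall={m['recall']:.3f}")
```

Here accuracy rounds to 0.989 even though precision is 0.5 (every other flag is wrong) and recall is 0.75 (a quarter of true events are missed) — which is why per-class error rates, not a single accuracy number, should anchor any scaling decision.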

2. Data quality, coding heterogeneity and signal limitations​

Claims and case‑management systems are not designed primarily for predictive analytics. They contain noisy, delayed and sometimes incomplete information. Models trained on historical claims patterns may pick up administrative artifacts rather than true clinical signals. That increases the risk of both missed events (if relevant signals are absent) and spurious alerts (if models learn to associate paperwork patterns with clinical events). Robust feature engineering, clinical validation and error‑analysis are required to avoid operational disruptions.

3. Privacy, governance and model lifecycle management​

Using Azure services that are covered by HIPAA BAAs reduces, but does not eliminate, governance responsibilities. The healthcare organization remains responsible for:
  • Contractual assurances (BAA scope, data use, retention)
  • Data minimization and masking where possible
  • Explainability and audit trails for model decisions
  • Continuous monitoring for model drift, performance regression and fairness issues
Microsoft’s compliance guidance makes clear that platform capabilities alone don’t establish compliance; customer configuration and operational controls matter.

4. Human workflow integration and decision authority​

AI‑driven flags must be integrated into workflows so they augment professional judgment rather than automate it incorrectly. Clear standard operating procedures (SOPs) must define how case managers should triage AI flags, verify them, and escalate. Poorly designed workflows risk alert fatigue, where staff ignore high‑value flags because too many false positives erode trust.

5. Vendor lock‑in and portability​

A stack that relies heavily on Microsoft Foundry, Azure SQL and Power BI can speed deployment but also creates migration considerations. Contracts should clarify data exportability, long-term storage, and portability of models and dashboards so the state can migrate or replicate the solution with different vendors in the future without losing audit trails or historical context. Microsoft Foundry and Power BI provide tooling to export artifacts, but contractual and operational planning is necessary.

6. Regulatory oversight and classification​

AI systems used in healthcare increasingly attract regulatory attention. The FDA and other agencies are sharpening guidance for AI in clinical contexts; while this project is focused on program oversight rather than direct clinical decision making, regulators will expect documented risk management, safety cases and post‑deployment monitoring for systems that shape care. RSM’s press release mentions disaster recovery and statewide AI‑readiness assessments but does not detail governance artifacts or ongoing monitoring plans.

Operational recommendations and best practices for production-grade deployments​

To convert pilot success into sustainable, safe operations, public agencies should treat the platform like any mission‑critical health IT system and follow a disciplined playbook:
  • Establish governance: Create a multidisciplinary steering committee with clinical leads, data scientists, privacy officers, legal counsel and case‑management representatives to oversee deployment and change control.
  • Publish evaluation metrics: Release an evaluation brief describing datasets, validation methodology, and error rates (precision, recall, AUC) so independent reviewers can assess performance.
  • Implement staged rollouts: Start with focused pilot cohorts, measure real-world precision/recall and human workload impact, then expand progressively.
  • Define SOPs: Articulate exactly how case managers should respond to flags, including verification steps, escalation criteria, and documentation requirements.
  • Monitor drift and fairness: Implement automated monitoring pipelines to detect input/data‑distribution drift and track model performance by demographic slices.
  • Lock down data governance: Ensure BAAs, retention policies, encryption at rest/in transit, role‑based access control, auditing and data‑deletion workflows are configured and audited regularly.
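As one concrete instance of the drift monitoring recommended above, the Population Stability Index (PSI) is a widely used check that compares the baseline distribution of a model input or score against its current distribution. The bin shares below are hypothetical:

```python
import math

def psi(expected_pct, actual_pct, eps=1e-6):
    """Population Stability Index over matching bins of an input or score.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    total = 0.0
    for e, a in zip(expected_pct, actual_pct):
        e = max(e, eps)  # guard against log(0) for empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Hypothetical: share of participants per risk-score bin at deployment vs. now
baseline = [0.40, 0.30, 0.20, 0.10]
current  = [0.25, 0.30, 0.25, 0.20]
drift = psi(baseline, current)
print(f"PSI = {drift:.3f}")
```

A PSI in the moderate range, as in this example, would not prove the model is wrong, but it should trigger review of recent input data and a re-check of performance on fresh labeled cases — the kind of automated guardrail the recommendation above calls for.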

Policy implications and public‑sector considerations​

This deployment illustrates key policy debates for state governments adopting AI in health oversight:
  • Transparency vs. speed: Agencies must balance rapid adoption with the need for transparent evaluations that earn public trust.
  • Equity: Automated risk scoring must be scrutinized for disparate impacts; underserved groups may be under‑represented in training data, creating blind spots.
  • Procurement: State procurement frameworks should require vendor transparency around data use, third‑party audits and portability clauses to avoid long‑term vendor lock‑in.
  • Capacity building: Universities and local institutions (like UH Mānoa in this case) can build analytic capacity and provide independent validation — a model other states may want to replicate.

Conclusion​

RSM’s AI‑powered adverse‑event reporting platform for Hawaiʻi’s Developmental Disabilities Division represents a credible and pragmatic application of cloud AI to public health oversight. The architecture leverages mature Microsoft technologies — Azure, Microsoft Foundry, Azure SQL and Power BI — and the cross‑sector collaboration with the University of Hawaiʻi provides domain grounding and local analytic input. These components make the solution operationally feasible and legally tractable under existing HIPAA‑capable cloud frameworks.

However, the project’s most headline‑grabbing metric — the 98.9% accuracy figure — is currently an internal claim that lacks publicly available, independently verifiable detail on methodology and error rates. Before the model is scaled statewide or exported to other jurisdictions, independent evaluation, published performance metrics and well‑documented governance controls are essential. Without these, there is a real danger of overconfidence in model outputs, unfair outcomes for specific subgroups, and operational stress from false positives or negatives.

The deployment still stands as a useful case study: it demonstrates how a consulting firm, a public health agency and an academic partner can combine engineering, domain expertise and platform services to address under‑reported adverse events — a persistent challenge in social‑care programs. The coming months should reveal whether the platform’s early promise translates into measurable reductions in harm and sustained improvements to participant outcomes, and whether the state and its partners publish the evaluation evidence needed to move from pilot to trusted public health practice.

Source: RSM US LLP RSM Deploys AI-Powered Health Reporting Solution Using Microsoft Technologies to Support Hawai'i State Department of Health Program | RSM US